Feb 19 19:14:49 localhost kernel: Linux version 5.14.0-681.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Wed Feb 11 20:19:22 UTC 2026
Feb 19 19:14:49 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 19 19:14:49 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 19 19:14:49 localhost kernel: BIOS-provided physical RAM map:
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 19 19:14:49 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 19 19:14:49 localhost kernel: NX (Execute Disable) protection: active
Feb 19 19:14:49 localhost kernel: APIC: Static calls initialized
Feb 19 19:14:49 localhost kernel: SMBIOS 2.8 present.
Feb 19 19:14:49 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 19 19:14:49 localhost kernel: Hypervisor detected: KVM
Feb 19 19:14:49 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 19 19:14:49 localhost kernel: kvm-clock: using sched offset of 8419545595 cycles
Feb 19 19:14:49 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 19 19:14:49 localhost kernel: tsc: Detected 2800.000 MHz processor
Feb 19 19:14:49 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 19 19:14:49 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 19 19:14:49 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 19 19:14:49 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 19 19:14:49 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 19 19:14:49 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 19 19:14:49 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 19 19:14:49 localhost kernel: Using GB pages for direct mapping
Feb 19 19:14:49 localhost kernel: RAMDISK: [mem 0x1b6f6000-0x29b72fff]
Feb 19 19:14:49 localhost kernel: ACPI: Early table checksum verification disabled
Feb 19 19:14:49 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 19 19:14:49 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 19 19:14:49 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 19 19:14:49 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 19 19:14:49 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 19 19:14:49 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 19 19:14:49 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 19 19:14:49 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 19 19:14:49 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 19 19:14:49 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 19 19:14:49 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 19 19:14:49 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 19 19:14:49 localhost kernel: No NUMA configuration found
Feb 19 19:14:49 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 19 19:14:49 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Feb 19 19:14:49 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 19 19:14:49 localhost kernel: Zone ranges:
Feb 19 19:14:49 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 19 19:14:49 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 19 19:14:49 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 19 19:14:49 localhost kernel:   Device   empty
Feb 19 19:14:49 localhost kernel: Movable zone start for each node
Feb 19 19:14:49 localhost kernel: Early memory node ranges
Feb 19 19:14:49 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 19 19:14:49 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 19 19:14:49 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 19 19:14:49 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 19 19:14:49 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 19 19:14:49 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 19 19:14:49 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 19 19:14:49 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 19 19:14:49 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 19 19:14:49 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 19 19:14:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 19 19:14:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 19 19:14:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 19 19:14:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 19 19:14:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 19 19:14:49 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 19 19:14:49 localhost kernel: TSC deadline timer available
Feb 19 19:14:49 localhost kernel: CPU topo: Max. logical packages:   8
Feb 19 19:14:49 localhost kernel: CPU topo: Max. logical dies:       8
Feb 19 19:14:49 localhost kernel: CPU topo: Max. dies per package:   1
Feb 19 19:14:49 localhost kernel: CPU topo: Max. threads per core:   1
Feb 19 19:14:49 localhost kernel: CPU topo: Num. cores per package:     1
Feb 19 19:14:49 localhost kernel: CPU topo: Num. threads per package:   1
Feb 19 19:14:49 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 19 19:14:49 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 19 19:14:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 19 19:14:49 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 19 19:14:49 localhost kernel: Booting paravirtualized kernel on KVM
Feb 19 19:14:49 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 19 19:14:49 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 19 19:14:49 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 19 19:14:49 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 19 19:14:49 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 19 19:14:49 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 19 19:14:49 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 19 19:14:49 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64", will be passed to user space.
Feb 19 19:14:49 localhost kernel: random: crng init done
Feb 19 19:14:49 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 19 19:14:49 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 19 19:14:49 localhost kernel: Fallback order for Node 0: 0 
Feb 19 19:14:49 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 19 19:14:49 localhost kernel: Policy zone: Normal
Feb 19 19:14:49 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 19 19:14:49 localhost kernel: software IO TLB: area num 8.
Feb 19 19:14:49 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 19 19:14:49 localhost kernel: ftrace: allocating 49565 entries in 194 pages
Feb 19 19:14:49 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 19 19:14:49 localhost kernel: Dynamic Preempt: voluntary
Feb 19 19:14:49 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 19 19:14:49 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 19 19:14:49 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 19 19:14:49 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 19 19:14:49 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 19 19:14:49 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 19 19:14:49 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 19 19:14:49 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 19 19:14:49 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 19 19:14:49 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 19 19:14:49 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 19 19:14:49 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 19 19:14:49 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 19 19:14:49 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 19 19:14:49 localhost kernel: Console: colour VGA+ 80x25
Feb 19 19:14:49 localhost kernel: printk: console [ttyS0] enabled
Feb 19 19:14:49 localhost kernel: ACPI: Core revision 20230331
Feb 19 19:14:49 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 19 19:14:49 localhost kernel: x2apic enabled
Feb 19 19:14:49 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 19 19:14:49 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 19 19:14:49 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 19 19:14:49 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 19 19:14:49 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 19 19:14:49 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 19 19:14:49 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 19 19:14:49 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 19 19:14:49 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 19 19:14:49 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 19 19:14:49 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 19 19:14:49 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 19 19:14:49 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 19 19:14:49 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 19 19:14:49 localhost kernel: active return thunk: retbleed_return_thunk
Feb 19 19:14:49 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 19 19:14:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 19 19:14:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 19 19:14:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 19 19:14:49 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 19 19:14:49 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 19 19:14:49 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 19 19:14:49 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 19 19:14:49 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 19 19:14:49 localhost kernel: landlock: Up and running.
Feb 19 19:14:49 localhost kernel: Yama: becoming mindful.
Feb 19 19:14:49 localhost kernel: SELinux:  Initializing.
Feb 19 19:14:49 localhost kernel: LSM support for eBPF active
Feb 19 19:14:49 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 19 19:14:49 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 19 19:14:49 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 19 19:14:49 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 19 19:14:49 localhost kernel: ... version:                0
Feb 19 19:14:49 localhost kernel: ... bit width:              48
Feb 19 19:14:49 localhost kernel: ... generic registers:      6
Feb 19 19:14:49 localhost kernel: ... value mask:             0000ffffffffffff
Feb 19 19:14:49 localhost kernel: ... max period:             00007fffffffffff
Feb 19 19:14:49 localhost kernel: ... fixed-purpose events:   0
Feb 19 19:14:49 localhost kernel: ... event mask:             000000000000003f
Feb 19 19:14:49 localhost kernel: signal: max sigframe size: 1776
Feb 19 19:14:49 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 19 19:14:49 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 19 19:14:49 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 19 19:14:49 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 19 19:14:49 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 19 19:14:49 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 19 19:14:49 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 19 19:14:49 localhost kernel: node 0 deferred pages initialised in 11ms
Feb 19 19:14:49 localhost kernel: Memory: 7617684K/8388068K available (16384K kernel code, 5795K rwdata, 13948K rodata, 4204K init, 7180K bss, 764388K reserved, 0K cma-reserved)
Feb 19 19:14:49 localhost kernel: devtmpfs: initialized
Feb 19 19:14:49 localhost kernel: x86/mm: Memory block size: 128MB
Feb 19 19:14:49 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 19 19:14:49 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 19 19:14:49 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 19 19:14:49 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 19 19:14:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 19 19:14:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 19 19:14:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 19 19:14:49 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 19 19:14:49 localhost kernel: audit: type=2000 audit(1771528488.858:1): state=initialized audit_enabled=0 res=1
Feb 19 19:14:49 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 19 19:14:49 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 19 19:14:49 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 19 19:14:49 localhost kernel: cpuidle: using governor menu
Feb 19 19:14:49 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 19 19:14:49 localhost kernel: PCI: Using configuration type 1 for base access
Feb 19 19:14:49 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 19 19:14:49 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 19 19:14:49 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 19 19:14:49 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 19 19:14:49 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 19 19:14:49 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 19 19:14:49 localhost kernel: Demotion targets for Node 0: null
Feb 19 19:14:49 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 19 19:14:49 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 19 19:14:49 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 19 19:14:49 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 19 19:14:49 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 19 19:14:49 localhost kernel: ACPI: Interpreter enabled
Feb 19 19:14:49 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 19 19:14:49 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 19 19:14:49 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 19 19:14:49 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 19 19:14:49 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 19 19:14:49 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 19 19:14:49 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [3] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [4] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [5] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [6] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [7] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [8] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [9] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [10] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [11] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [12] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [13] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [14] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [15] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [16] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [17] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [18] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [19] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [20] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [21] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [22] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [23] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [24] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [25] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [26] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [27] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [28] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [29] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [30] registered
Feb 19 19:14:49 localhost kernel: acpiphp: Slot [31] registered
Feb 19 19:14:49 localhost kernel: PCI host bridge to bus 0000:00
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 19 19:14:49 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 19 19:14:49 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 19 19:14:49 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 19 19:14:49 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 19 19:14:49 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 19 19:14:49 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 19 19:14:49 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 19 19:14:49 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 19 19:14:49 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 19 19:14:49 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 19 19:14:49 localhost kernel: iommu: Default domain type: Translated
Feb 19 19:14:49 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 19 19:14:49 localhost kernel: SCSI subsystem initialized
Feb 19 19:14:49 localhost kernel: ACPI: bus type USB registered
Feb 19 19:14:49 localhost kernel: usbcore: registered new interface driver usbfs
Feb 19 19:14:49 localhost kernel: usbcore: registered new interface driver hub
Feb 19 19:14:49 localhost kernel: usbcore: registered new device driver usb
Feb 19 19:14:49 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 19 19:14:49 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 19 19:14:49 localhost kernel: PTP clock support registered
Feb 19 19:14:49 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 19 19:14:49 localhost kernel: NetLabel: Initializing
Feb 19 19:14:49 localhost kernel: NetLabel:  domain hash size = 128
Feb 19 19:14:49 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 19 19:14:49 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 19 19:14:49 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 19 19:14:49 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 19 19:14:49 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 19 19:14:49 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 19 19:14:49 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 19 19:14:49 localhost kernel: vgaarb: loaded
Feb 19 19:14:49 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 19 19:14:49 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 19 19:14:49 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 19 19:14:49 localhost kernel: pnp: PnP ACPI init
Feb 19 19:14:49 localhost kernel: pnp 00:03: [dma 2]
Feb 19 19:14:49 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 19 19:14:49 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 19 19:14:49 localhost kernel: NET: Registered PF_INET protocol family
Feb 19 19:14:49 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 19 19:14:49 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 19 19:14:49 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 19 19:14:49 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 19 19:14:49 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 19 19:14:49 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 19 19:14:49 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 19 19:14:49 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 19 19:14:49 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 19 19:14:49 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 19 19:14:49 localhost kernel: NET: Registered PF_XDP protocol family
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 19 19:14:49 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 19 19:14:49 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 19 19:14:49 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 19 19:14:49 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 25273 usecs
Feb 19 19:14:49 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 19 19:14:49 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 19 19:14:49 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 19 19:14:49 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 19 19:14:49 localhost kernel: ACPI: bus type thunderbolt registered
Feb 19 19:14:49 localhost kernel: Initialise system trusted keyrings
Feb 19 19:14:49 localhost kernel: Key type blacklist registered
Feb 19 19:14:49 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 19 19:14:49 localhost kernel: zbud: loaded
Feb 19 19:14:49 localhost kernel: integrity: Platform Keyring initialized
Feb 19 19:14:49 localhost kernel: integrity: Machine keyring initialized
Feb 19 19:14:49 localhost kernel: Freeing initrd memory: 233972K
Feb 19 19:14:49 localhost kernel: NET: Registered PF_ALG protocol family
Feb 19 19:14:49 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 19 19:14:49 localhost kernel: Key type asymmetric registered
Feb 19 19:14:49 localhost kernel: Asymmetric key parser 'x509' registered
Feb 19 19:14:49 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 19 19:14:49 localhost kernel: io scheduler mq-deadline registered
Feb 19 19:14:49 localhost kernel: io scheduler kyber registered
Feb 19 19:14:49 localhost kernel: io scheduler bfq registered
Feb 19 19:14:49 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 19 19:14:49 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 19 19:14:49 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 19 19:14:49 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 19 19:14:49 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 19 19:14:49 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 19 19:14:49 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 19 19:14:49 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 19 19:14:49 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 19 19:14:49 localhost kernel: Non-volatile memory driver v1.3
Feb 19 19:14:49 localhost kernel: rdac: device handler registered
Feb 19 19:14:49 localhost kernel: hp_sw: device handler registered
Feb 19 19:14:49 localhost kernel: emc: device handler registered
Feb 19 19:14:49 localhost kernel: alua: device handler registered
Feb 19 19:14:49 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 19 19:14:49 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 19 19:14:49 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 19 19:14:49 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 19 19:14:49 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 19 19:14:49 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 19 19:14:49 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 19 19:14:49 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-681.el9.x86_64 uhci_hcd
Feb 19 19:14:49 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 19 19:14:49 localhost kernel: hub 1-0:1.0: USB hub found
Feb 19 19:14:49 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 19 19:14:49 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 19 19:14:49 localhost kernel: usbserial: USB Serial support registered for generic
Feb 19 19:14:49 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 19 19:14:49 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 19 19:14:49 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 19 19:14:49 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 19 19:14:49 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 19 19:14:49 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 19 19:14:49 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 19 19:14:49 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-19T19:14:49 UTC (1771528489)
Feb 19 19:14:49 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 19 19:14:49 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 19 19:14:49 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 19 19:14:49 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 19 19:14:49 localhost kernel: usbcore: registered new interface driver usbhid
Feb 19 19:14:49 localhost kernel: usbhid: USB HID core driver
Feb 19 19:14:49 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 19 19:14:49 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 19 19:14:49 localhost kernel: Initializing XFRM netlink socket
Feb 19 19:14:49 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 19 19:14:49 localhost kernel: Segment Routing with IPv6
Feb 19 19:14:49 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 19 19:14:49 localhost kernel: mpls_gso: MPLS GSO support
Feb 19 19:14:49 localhost kernel: IPI shorthand broadcast: enabled
Feb 19 19:14:49 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 19 19:14:49 localhost kernel: AES CTR mode by8 optimization enabled
Feb 19 19:14:49 localhost kernel: sched_clock: Marking stable (1054002040, 145056888)->(1268992517, -69933589)
Feb 19 19:14:49 localhost kernel: registered taskstats version 1
Feb 19 19:14:49 localhost kernel: Loading compiled-in X.509 certificates
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4fde1469d8033882223ec575c1a1c1b88c9a497b'
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 19 19:14:49 localhost kernel: Demotion targets for Node 0: null
Feb 19 19:14:49 localhost kernel: page_owner is disabled
Feb 19 19:14:49 localhost kernel: Key type .fscrypt registered
Feb 19 19:14:49 localhost kernel: Key type fscrypt-provisioning registered
Feb 19 19:14:49 localhost kernel: Key type big_key registered
Feb 19 19:14:49 localhost kernel: Key type encrypted registered
Feb 19 19:14:49 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 19 19:14:49 localhost kernel: Loading compiled-in module X.509 certificates
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4fde1469d8033882223ec575c1a1c1b88c9a497b'
Feb 19 19:14:49 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 19 19:14:49 localhost kernel: ima: No architecture policies found
Feb 19 19:14:49 localhost kernel: evm: Initialising EVM extended attributes:
Feb 19 19:14:49 localhost kernel: evm: security.selinux
Feb 19 19:14:49 localhost kernel: evm: security.SMACK64 (disabled)
Feb 19 19:14:49 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 19 19:14:49 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 19 19:14:49 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 19 19:14:49 localhost kernel: evm: security.apparmor (disabled)
Feb 19 19:14:49 localhost kernel: evm: security.ima
Feb 19 19:14:49 localhost kernel: evm: security.capability
Feb 19 19:14:49 localhost kernel: evm: HMAC attrs: 0x1
Feb 19 19:14:49 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 19 19:14:49 localhost kernel: Running certificate verification RSA selftest
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 19 19:14:49 localhost kernel: Running certificate verification ECDSA selftest
Feb 19 19:14:49 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 19 19:14:49 localhost kernel: clk: Disabling unused clocks
Feb 19 19:14:49 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 19 19:14:49 localhost kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 19 19:14:49 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 19 19:14:49 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 388K
Feb 19 19:14:49 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 19 19:14:49 localhost kernel: Run /init as init process
Feb 19 19:14:49 localhost kernel:   with arguments:
Feb 19 19:14:49 localhost kernel:     /init
Feb 19 19:14:49 localhost kernel:   with environment:
Feb 19 19:14:49 localhost kernel:     HOME=/
Feb 19 19:14:49 localhost kernel:     TERM=linux
Feb 19 19:14:49 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64
Feb 19 19:14:49 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 19 19:14:49 localhost systemd[1]: Detected virtualization kvm.
Feb 19 19:14:49 localhost systemd[1]: Detected architecture x86-64.
Feb 19 19:14:49 localhost systemd[1]: Running in initrd.
Feb 19 19:14:49 localhost systemd[1]: No hostname configured, using default hostname.
Feb 19 19:14:49 localhost systemd[1]: Hostname set to <localhost>.
Feb 19 19:14:49 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 19 19:14:49 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 19 19:14:49 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 19 19:14:49 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 19 19:14:49 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 19 19:14:49 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 19 19:14:49 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 19 19:14:49 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 19 19:14:49 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 19 19:14:49 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 19 19:14:49 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 19 19:14:49 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 19 19:14:49 localhost systemd[1]: Reached target Local File Systems.
Feb 19 19:14:49 localhost systemd[1]: Reached target Path Units.
Feb 19 19:14:49 localhost systemd[1]: Reached target Slice Units.
Feb 19 19:14:49 localhost systemd[1]: Reached target Swaps.
Feb 19 19:14:49 localhost systemd[1]: Reached target Timer Units.
Feb 19 19:14:49 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 19 19:14:49 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 19 19:14:49 localhost systemd[1]: Listening on Journal Socket.
Feb 19 19:14:49 localhost systemd[1]: Listening on udev Control Socket.
Feb 19 19:14:49 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 19 19:14:49 localhost systemd[1]: Reached target Socket Units.
Feb 19 19:14:49 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 19 19:14:49 localhost systemd[1]: Starting Journal Service...
Feb 19 19:14:49 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 19 19:14:49 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 19 19:14:49 localhost systemd[1]: Starting Create System Users...
Feb 19 19:14:49 localhost systemd[1]: Starting Setup Virtual Console...
Feb 19 19:14:49 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 19 19:14:49 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 19 19:14:49 localhost systemd[1]: Finished Create System Users.
Feb 19 19:14:49 localhost systemd-journald[304]: Journal started
Feb 19 19:14:49 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/ac1ff2642c2d43738723bb8a73a49955) is 8.0M, max 153.6M, 145.6M free.
Feb 19 19:14:49 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Feb 19 19:14:49 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Feb 19 19:14:49 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 19 19:14:49 localhost systemd[1]: Started Journal Service.
Feb 19 19:14:50 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 19 19:14:50 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 19 19:14:50 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 19 19:14:50 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 19 19:14:50 localhost systemd[1]: Finished Setup Virtual Console.
Feb 19 19:14:50 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 19 19:14:50 localhost systemd[1]: Starting dracut cmdline hook...
Feb 19 19:14:50 localhost dracut-cmdline[323]: dracut-9 dracut-057-110.git20260130.el9
Feb 19 19:14:50 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 19 19:14:50 localhost systemd[1]: Finished dracut cmdline hook.
Feb 19 19:14:50 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 19 19:14:50 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 19 19:14:50 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 19 19:14:50 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 19 19:14:50 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 19 19:14:50 localhost kernel: RPC: Registered udp transport module.
Feb 19 19:14:50 localhost kernel: RPC: Registered tcp transport module.
Feb 19 19:14:50 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 19 19:14:50 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 19 19:14:50 localhost rpc.statd[441]: Version 2.5.4 starting
Feb 19 19:14:50 localhost rpc.statd[441]: Initializing NSM state
Feb 19 19:14:50 localhost rpc.idmapd[446]: Setting log level to 0
Feb 19 19:14:50 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 19 19:14:50 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 19 19:14:50 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Feb 19 19:14:50 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 19 19:14:50 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 19 19:14:50 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 19 19:14:50 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 19 19:14:50 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 19 19:14:50 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 19 19:14:50 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 19 19:14:50 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 19 19:14:50 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 19 19:14:50 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 19 19:14:50 localhost systemd[1]: Reached target Network.
Feb 19 19:14:50 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 19 19:14:50 localhost systemd[1]: Starting dracut initqueue hook...
Feb 19 19:14:50 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 19 19:14:50 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 19 19:14:50 localhost kernel: libata version 3.00 loaded.
Feb 19 19:14:50 localhost kernel: ACPI: bus type drm_connector registered
Feb 19 19:14:50 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 19 19:14:50 localhost kernel:  vda: vda1
Feb 19 19:14:50 localhost systemd-udevd[479]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:14:50 localhost kernel: scsi host0: ata_piix
Feb 19 19:14:50 localhost kernel: scsi host1: ata_piix
Feb 19 19:14:50 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 19 19:14:50 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 19 19:14:50 localhost systemd[1]: Found device /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c.
Feb 19 19:14:50 localhost systemd[1]: Reached target Initrd Root Device.
Feb 19 19:14:50 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 19 19:14:50 localhost kernel: ata1: found unknown device (class 0)
Feb 19 19:14:50 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 19 19:14:50 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 19 19:14:51 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 19 19:14:51 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 19 19:14:51 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 19 19:14:51 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 19 19:14:51 localhost kernel: Console: switching to colour dummy device 80x25
Feb 19 19:14:51 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 19 19:14:51 localhost kernel: [drm] features: -context_init
Feb 19 19:14:51 localhost systemd[1]: Reached target System Initialization.
Feb 19 19:14:51 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 19 19:14:51 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 19 19:14:51 localhost systemd[1]: Reached target Basic System.
Feb 19 19:14:51 localhost kernel: [drm] number of scanouts: 1
Feb 19 19:14:51 localhost kernel: [drm] number of cap sets: 0
Feb 19 19:14:51 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 19 19:14:51 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 19 19:14:51 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 19 19:14:51 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 19 19:14:51 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 19 19:14:51 localhost systemd[1]: Finished dracut initqueue hook.
Feb 19 19:14:51 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 19 19:14:51 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 19 19:14:51 localhost systemd[1]: Reached target Remote File Systems.
Feb 19 19:14:51 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 19 19:14:51 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 19 19:14:51 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c...
Feb 19 19:14:51 localhost systemd-fsck[563]: /usr/sbin/fsck.xfs: XFS file system.
Feb 19 19:14:51 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c.
Feb 19 19:14:51 localhost systemd[1]: Mounting /sysroot...
Feb 19 19:14:51 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 19 19:14:51 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9d578f93-c4e9-4172-8459-ef150e54751c
Feb 19 19:14:51 localhost kernel: XFS (vda1): Ending clean mount
Feb 19 19:14:51 localhost systemd[1]: Mounted /sysroot.
Feb 19 19:14:51 localhost systemd[1]: Reached target Initrd Root File System.
Feb 19 19:14:51 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 19 19:14:51 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 19 19:14:51 localhost systemd[1]: Reached target Initrd File Systems.
Feb 19 19:14:51 localhost systemd[1]: Reached target Initrd Default Target.
Feb 19 19:14:51 localhost systemd[1]: Starting dracut mount hook...
Feb 19 19:14:51 localhost systemd[1]: Finished dracut mount hook.
Feb 19 19:14:51 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 19 19:14:51 localhost rpc.idmapd[446]: exiting on signal 15
Feb 19 19:14:51 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 19 19:14:51 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 19 19:14:51 localhost systemd[1]: Stopped target Network.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Timer Units.
Feb 19 19:14:51 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 19 19:14:51 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Basic System.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Path Units.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Remote File Systems.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Slice Units.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Socket Units.
Feb 19 19:14:51 localhost systemd[1]: Stopped target System Initialization.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Local File Systems.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Swaps.
Feb 19 19:14:51 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped dracut mount hook.
Feb 19 19:14:51 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 19 19:14:51 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 19 19:14:51 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 19 19:14:51 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 19 19:14:51 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 19 19:14:51 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 19 19:14:51 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 19 19:14:51 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 19 19:14:51 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 19 19:14:52 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 19 19:14:52 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 19 19:14:52 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 19 19:14:52 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Closed udev Control Socket.
Feb 19 19:14:52 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Closed udev Kernel Socket.
Feb 19 19:14:52 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 19 19:14:52 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 19 19:14:52 localhost systemd[1]: Starting Cleanup udev Database...
Feb 19 19:14:52 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 19 19:14:52 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 19 19:14:52 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped Create System Users.
Feb 19 19:14:52 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 19 19:14:52 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Finished Cleanup udev Database.
Feb 19 19:14:52 localhost systemd[1]: Reached target Switch Root.
Feb 19 19:14:52 localhost systemd[1]: Starting Switch Root...
Feb 19 19:14:52 localhost systemd[1]: Switching root.
Feb 19 19:14:52 localhost systemd-journald[304]: Journal stopped
Feb 19 19:14:52 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Feb 19 19:14:52 localhost kernel: audit: type=1404 audit(1771528492.218:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability open_perms=1
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:14:52 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:14:52 localhost kernel: audit: type=1403 audit(1771528492.332:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
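The kernel audit records above carry their own timestamps: the field "audit(1771528492.332:3)" encodes epoch-seconds.milliseconds:serial. A minimal Python sketch (illustrative only, not part of the log) decodes that value and reproduces the syslog wall-clock time:

    from datetime import datetime, timezone

    # "audit(<epoch>.<ms>:<serial>)" from the SELinux record above
    stamp = "1771528492.332:3"
    seconds, serial = stamp.split(":")
    when = datetime.fromtimestamp(float(seconds), tz=timezone.utc)
    print(when, "serial", serial)
    # -> 2026-02-19 19:14:52.332000+00:00, matching "Feb 19 19:14:52"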
Feb 19 19:14:52 localhost systemd[1]: Successfully loaded SELinux policy in 118.799ms.
Feb 19 19:14:52 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.775ms.
Feb 19 19:14:52 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
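In the systemd banner above, each "+NAME" token is a feature compiled in and each "-NAME" one compiled out. A small sketch using a shortened copy of that flag string (the parser is illustrative, not a systemd interface):

    # Shortened copy of the feature string logged above.
    flags = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK -BPF_FRAMEWORK".split()
    enabled = {f[1:] for f in flags if f.startswith("+")}
    disabled = {f[1:] for f in flags if f.startswith("-")}
    print(sorted(disabled))  # -> ['APPARMOR', 'BPF_FRAMEWORK']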
Feb 19 19:14:52 localhost systemd[1]: Detected virtualization kvm.
Feb 19 19:14:52 localhost systemd[1]: Detected architecture x86-64.
Feb 19 19:14:52 localhost systemd-rc-local-generator[645]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:14:52 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped Switch Root.
Feb 19 19:14:52 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 19 19:14:52 localhost systemd[1]: Created slice Slice /system/getty.
Feb 19 19:14:52 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 19 19:14:52 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 19 19:14:52 localhost systemd[1]: Created slice User and Session Slice.
Feb 19 19:14:52 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 19 19:14:52 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 19 19:14:52 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 19 19:14:52 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 19 19:14:52 localhost systemd[1]: Stopped target Switch Root.
Feb 19 19:14:52 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 19 19:14:52 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 19 19:14:52 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 19 19:14:52 localhost systemd[1]: Reached target Path Units.
Feb 19 19:14:52 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 19 19:14:52 localhost systemd[1]: Reached target Slice Units.
Feb 19 19:14:52 localhost systemd[1]: Reached target Swaps.
Feb 19 19:14:52 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 19 19:14:52 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 19 19:14:52 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 19 19:14:52 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 19 19:14:52 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 19 19:14:52 localhost systemd[1]: Listening on udev Control Socket.
Feb 19 19:14:52 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 19 19:14:52 localhost systemd[1]: Mounting Huge Pages File System...
Feb 19 19:14:52 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 19 19:14:52 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 19 19:14:52 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 19 19:14:52 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 19 19:14:52 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 19 19:14:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 19 19:14:52 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 19 19:14:52 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Feb 19 19:14:52 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 19 19:14:52 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 19 19:14:52 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 19 19:14:52 localhost systemd[1]: Stopped Journal Service.
Feb 19 19:14:52 localhost systemd[1]: Starting Journal Service...
Feb 19 19:14:52 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 19 19:14:52 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 19 19:14:52 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 19 19:14:52 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 19 19:14:52 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 19 19:14:52 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 19 19:14:52 localhost kernel: fuse: init (API version 7.37)
Feb 19 19:14:52 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 19 19:14:52 localhost systemd-journald[693]: Journal started
Feb 19 19:14:52 localhost systemd-journald[693]: Runtime Journal (/run/log/journal/621707288f5710e1387f73ed6d90e964) is 8.0M, max 153.6M, 145.6M free.
Feb 19 19:14:52 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 19 19:14:52 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Started Journal Service.
Feb 19 19:14:52 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
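The xfs warning above cites 0x7fffffff, the largest 32-bit signed UNIX timestamp. A minimal sketch of the arithmetic (nothing below comes from the log itself):

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, i.e. "supports timestamps until 2038"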
Feb 19 19:14:52 localhost systemd[1]: Mounted Huge Pages File System.
Feb 19 19:14:52 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 19 19:14:52 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 19 19:14:52 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 19 19:14:52 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 19 19:14:52 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 19 19:14:52 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 19 19:14:52 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 19 19:14:52 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 19 19:14:52 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 19 19:14:52 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 19 19:14:52 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 19 19:14:52 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 19 19:14:52 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 19 19:14:52 localhost systemd[1]: Mounting FUSE Control File System...
Feb 19 19:14:52 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 19 19:14:52 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 19 19:14:53 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 19 19:14:53 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 19 19:14:53 localhost systemd[1]: Starting Load/Save OS Random Seed...
Feb 19 19:14:53 localhost systemd[1]: Starting Create System Users...
Feb 19 19:14:53 localhost systemd[1]: Mounted FUSE Control File System.
Feb 19 19:14:53 localhost systemd-journald[693]: Runtime Journal (/run/log/journal/621707288f5710e1387f73ed6d90e964) is 8.0M, max 153.6M, 145.6M free.
Feb 19 19:14:53 localhost systemd-journald[693]: Received client request to flush runtime journal.
Feb 19 19:14:53 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 19 19:14:53 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 19 19:14:53 localhost systemd[1]: Finished Load/Save OS Random Seed.
Feb 19 19:14:53 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 19 19:14:53 localhost systemd[1]: Finished Create System Users.
Feb 19 19:14:53 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 19 19:14:53 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 19 19:14:53 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 19 19:14:53 localhost systemd[1]: Reached target Local File Systems.
Feb 19 19:14:53 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 19 19:14:53 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 19 19:14:53 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 19 19:14:53 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 19 19:14:53 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 19 19:14:53 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 19 19:14:53 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 19 19:14:53 localhost bootctl[711]: Couldn't find EFI system partition, skipping.
Feb 19 19:14:53 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 19 19:14:53 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 19 19:14:53 localhost systemd[1]: Starting Security Auditing Service...
Feb 19 19:14:53 localhost systemd[1]: Starting RPC Bind...
Feb 19 19:14:53 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 19 19:14:53 localhost auditd[717]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 19 19:14:53 localhost auditd[717]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 19 19:14:53 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 19 19:14:53 localhost systemd[1]: Started RPC Bind.
Feb 19 19:14:53 localhost augenrules[722]: /sbin/augenrules: No change
Feb 19 19:14:53 localhost augenrules[737]: No rules
Feb 19 19:14:53 localhost augenrules[737]: enabled 1
Feb 19 19:14:53 localhost augenrules[737]: failure 1
Feb 19 19:14:53 localhost augenrules[737]: pid 717
Feb 19 19:14:53 localhost augenrules[737]: rate_limit 0
Feb 19 19:14:53 localhost augenrules[737]: backlog_limit 8192
Feb 19 19:14:53 localhost augenrules[737]: lost 0
Feb 19 19:14:53 localhost augenrules[737]: backlog 3
Feb 19 19:14:53 localhost augenrules[737]: backlog_wait_time 60000
Feb 19 19:14:53 localhost augenrules[737]: backlog_wait_time_actual 0
Feb 19 19:14:53 localhost augenrules[737]: enabled 1
Feb 19 19:14:53 localhost augenrules[737]: failure 1
Feb 19 19:14:53 localhost augenrules[737]: pid 717
Feb 19 19:14:53 localhost augenrules[737]: rate_limit 0
Feb 19 19:14:53 localhost augenrules[737]: backlog_limit 8192
Feb 19 19:14:53 localhost augenrules[737]: lost 0
Feb 19 19:14:53 localhost augenrules[737]: backlog 4
Feb 19 19:14:53 localhost augenrules[737]: backlog_wait_time 60000
Feb 19 19:14:53 localhost augenrules[737]: backlog_wait_time_actual 0
Feb 19 19:14:53 localhost augenrules[737]: enabled 1
Feb 19 19:14:53 localhost augenrules[737]: failure 1
Feb 19 19:14:53 localhost augenrules[737]: pid 717
Feb 19 19:14:53 localhost augenrules[737]: rate_limit 0
Feb 19 19:14:53 localhost augenrules[737]: backlog_limit 8192
Feb 19 19:14:53 localhost augenrules[737]: lost 0
Feb 19 19:14:53 localhost augenrules[737]: backlog 4
Feb 19 19:14:53 localhost augenrules[737]: backlog_wait_time 60000
Feb 19 19:14:53 localhost augenrules[737]: backlog_wait_time_actual 0
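augenrules reports the kernel audit status (the same key/value fields auditctl -s prints) each time it checks the loaded rules, which is why the block above repeats with only the backlog counter changing. A hedged sketch that parses such lines into a dict; the sample values are copied from this log, the helper itself is illustrative:

    sample = [
        "enabled 1", "failure 1", "pid 717", "rate_limit 0",
        "backlog_limit 8192", "lost 0", "backlog 4",
        "backlog_wait_time 60000", "backlog_wait_time_actual 0",
    ]
    status = {}
    for line in sample:
        key, _, value = line.partition(" ")
        status[key] = int(value)
    assert status["backlog_limit"] == 8192 and status["lost"] == 0
    print(status)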
Feb 19 19:14:53 localhost systemd[1]: Started Security Auditing Service.
Feb 19 19:14:53 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 19 19:14:53 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 19 19:14:53 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 19 19:14:53 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 19 19:14:53 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 19 19:14:53 localhost systemd[1]: Starting Update is Completed...
Feb 19 19:14:53 localhost systemd[1]: Finished Update is Completed.
Feb 19 19:14:53 localhost systemd-udevd[745]: Using default interface naming scheme 'rhel-9.0'.
Feb 19 19:14:53 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 19 19:14:53 localhost systemd[1]: Reached target System Initialization.
Feb 19 19:14:53 localhost systemd[1]: Started dnf makecache --timer.
Feb 19 19:14:53 localhost systemd[1]: Started Daily rotation of log files.
Feb 19 19:14:53 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 19 19:14:53 localhost systemd[1]: Reached target Timer Units.
Feb 19 19:14:53 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 19 19:14:53 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 19 19:14:53 localhost systemd[1]: Reached target Socket Units.
Feb 19 19:14:53 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 19 19:14:53 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 19 19:14:53 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 19 19:14:53 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 19 19:14:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 19 19:14:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 19 19:14:53 localhost systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:14:53 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 19 19:14:53 localhost systemd[1]: Reached target Basic System.
Feb 19 19:14:53 localhost dbus-broker-lau[778]: Ready
Feb 19 19:14:53 localhost systemd[1]: Starting NTP client/server...
Feb 19 19:14:53 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 19 19:14:53 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 19 19:14:53 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 19 19:14:53 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 19 19:14:53 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 19 19:14:53 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 19 19:14:53 localhost chronyd[804]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 19 19:14:53 localhost systemd[1]: Starting IPv4 firewall with iptables...
Feb 19 19:14:53 localhost chronyd[804]: Loaded 0 symmetric keys
Feb 19 19:14:53 localhost chronyd[804]: Using right/UTC timezone to obtain leap second data
Feb 19 19:14:53 localhost chronyd[804]: Loaded seccomp filter (level 2)
Feb 19 19:14:53 localhost systemd[1]: Started irqbalance daemon.
Feb 19 19:14:53 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 19 19:14:53 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 19 19:14:53 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 19 19:14:53 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 19 19:14:53 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 19 19:14:53 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 19 19:14:53 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 19 19:14:53 localhost systemd[1]: Starting User Login Management...
Feb 19 19:14:53 localhost systemd[1]: Started NTP client/server.
Feb 19 19:14:53 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 19 19:14:54 localhost kernel: kvm_amd: TSC scaling supported
Feb 19 19:14:54 localhost kernel: kvm_amd: Nested Virtualization enabled
Feb 19 19:14:54 localhost kernel: kvm_amd: Nested Paging enabled
Feb 19 19:14:54 localhost kernel: kvm_amd: LBR virtualization supported
Feb 19 19:14:54 localhost systemd-logind[810]: New seat seat0.
Feb 19 19:14:54 localhost systemd-logind[810]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 19 19:14:54 localhost systemd-logind[810]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 19 19:14:54 localhost systemd[1]: Started User Login Management.
Feb 19 19:14:54 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 19 19:14:54 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 19 19:14:54 localhost iptables.init[797]: iptables: Applying firewall rules: [  OK  ]
Feb 19 19:14:54 localhost systemd[1]: Finished IPv4 firewall with iptables.
Feb 19 19:14:54 localhost cloud-init[849]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 19 Feb 2026 19:14:54 +0000. Up 6.00 seconds.
Feb 19 19:14:54 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Feb 19 19:14:54 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Feb 19 19:14:54 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpofj2x24z.mount: Deactivated successfully.
Feb 19 19:14:54 localhost systemd[1]: Starting Hostname Service...
Feb 19 19:14:54 localhost systemd[1]: Started Hostname Service.
Feb 19 19:14:54 np0005624785.novalocal systemd-hostnamed[863]: Hostname set to <np0005624785.novalocal> (static)
Feb 19 19:14:54 np0005624785.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 19 19:14:54 np0005624785.novalocal systemd[1]: Reached target Preparation for Network.
Feb 19 19:14:54 np0005624785.novalocal systemd[1]: Starting Network Manager...
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0383] NetworkManager (version 1.54.3-2.el9) is starting... (boot:c101bf30-d7ef-4612-9fa1-9cb228425d0e)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0389] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0613] manager[0x564526c69000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0669] hostname: hostname: using hostnamed
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0670] hostname: static hostname changed from (none) to "np0005624785.novalocal"
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0675] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0781] manager[0x564526c69000]: rfkill: Wi-Fi hardware radio set enabled
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0783] manager[0x564526c69000]: rfkill: WWAN hardware radio set enabled
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0886] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0887] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0887] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0888] manager: Networking is enabled by state file
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0890] settings: Loaded settings plugin: keyfile (internal)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0930] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0953] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0967] dhcp: init: Using DHCP client 'internal'
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0971] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0985] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.0994] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1006] device (lo): Activation: starting connection 'lo' (8f9ecf5e-4818-46e5-a2b3-372e6bc78723)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1014] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1016] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1041] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1046] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1048] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1050] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1053] device (eth0): carrier: link connected
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1056] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1067] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1082] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1087] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1088] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1089] manager: NetworkManager state is now CONNECTING
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1091] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Started Network Manager.
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1097] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1100] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Reached target Network.
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1273] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1276] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 19 19:14:55 np0005624785.novalocal NetworkManager[867]: <info>  [1771528495.1281] device (lo): Activation: successful, device activated.
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Reached target NFS client services.
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: Reached target Remote File Systems.
Feb 19 19:14:55 np0005624785.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1263] dhcp4 (eth0): state changed new lease, address=38.102.83.220
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1273] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1296] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1333] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1335] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1339] manager: NetworkManager state is now CONNECTED_SITE
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1343] device (eth0): Activation: successful, device activated.
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1348] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 19 19:14:59 np0005624785.novalocal NetworkManager[867]: <info>  [1771528499.1352] manager: startup complete
Feb 19 19:14:59 np0005624785.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 19 19:14:59 np0005624785.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 19 Feb 2026 19:14:59 +0000. Up 10.94 seconds.
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |  eth0  | True |        38.102.83.220         | 255.255.255.0 | global | fa:16:3e:b8:42:3e |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |  eth0  | True | fe80::f816:3eff:feb8:423e/64 |       .       |  link  | fa:16:3e:b8:42:3e |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Feb 19 19:14:59 np0005624785.novalocal cloud-init[930]: ci-info: +-------+-------------+---------+-----------+-------+
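The IPv4 table above includes a /32 host route sending 169.254.169.254 (the cloud metadata service address) via 38.102.83.126 rather than the default gateway. A simplified longest-prefix-match sketch over those same entries; the matching logic only illustrates what the kernel does and is not a real routing stack:

    import ipaddress

    routes = [  # (destination, genmask, gateway, interface) from the table above
        ("0.0.0.0", "0.0.0.0", "38.102.83.1", "eth0"),
        ("38.102.83.0", "255.255.255.0", "0.0.0.0", "eth0"),
        ("169.254.169.254", "255.255.255.255", "38.102.83.126", "eth0"),
    ]

    def lookup(dst):
        addr = ipaddress.ip_address(dst)
        nets = [(ipaddress.ip_network(f"{d}/{m}"), gw, dev)
                for d, m, gw, dev in routes]
        return max((t for t in nets if addr in t[0]),
                   key=lambda t: t[0].prefixlen)

    print(lookup("169.254.169.254"))  # -> the /32 route via 38.102.83.126
    print(lookup("8.8.8.8"))          # -> the 0.0.0.0/0 default via 38.102.83.1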
Feb 19 19:15:00 np0005624785.novalocal useradd[997]: new group: name=cloud-user, GID=1001
Feb 19 19:15:00 np0005624785.novalocal useradd[997]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Feb 19 19:15:00 np0005624785.novalocal useradd[997]: add 'cloud-user' to group 'adm'
Feb 19 19:15:00 np0005624785.novalocal useradd[997]: add 'cloud-user' to group 'systemd-journal'
Feb 19 19:15:00 np0005624785.novalocal useradd[997]: add 'cloud-user' to shadow group 'adm'
Feb 19 19:15:00 np0005624785.novalocal useradd[997]: add 'cloud-user' to shadow group 'systemd-journal'
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Generating public/private rsa key pair.
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: The key fingerprint is:
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: SHA256:dsOcuBl8YAiwusHqUVdr9Mc7yCynxpL4AypNw8CnYuk root@np0005624785.novalocal
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: The key's randomart image is:
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: +---[RSA 3072]----+
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |  ...            |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |   . . .         |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |. .   .oo        |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |oo .  ooo=..     |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |o++. . oS.Bo     |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |o*=.. ..o*o..    |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |*+..o o.o= o     |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |oEo. + o+   .    |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |..  ..+.         |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: +----[SHA256]-----+
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Generating public/private ecdsa key pair.
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: The key fingerprint is:
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: SHA256:tKKiAPIm3YQ2wcdhUzLDUs44Q1rhUjXvEx1x0rTY/C8 root@np0005624785.novalocal
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: The key's randomart image is:
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: +---[ECDSA 256]---+
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |  ==@.. ++o      |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: | B.*.O . B..     |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |o B.= o + +      |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: | . * . o . .     |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |o + . + S   .    |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |o+ + . o     .   |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |o = o       E .  |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |.+ .         .   |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |.                |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: +----[SHA256]-----+
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Generating public/private ed25519 key pair.
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: The key fingerprint is:
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: SHA256:ptAvUckTc5fzc/uwejWH2VbIM0VAQGZFGFJTLqBsUpI root@np0005624785.novalocal
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: The key's randomart image is:
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: +--[ED25519 256]--+
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |      ..+ +oXX*o.|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |      E= * *+o  .|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |      . O   .+.o |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |     . + .   .B o|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |    . o S      Oo|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |     . =      +o=|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |      o .      =+|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |       .      o .|
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: |            .o   |
Feb 19 19:15:00 np0005624785.novalocal cloud-init[930]: +----[SHA256]-----+
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Reached target Cloud-config availability.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Reached target Network is Online.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting Crash recovery kernel arming...
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Feb 19 19:15:01 np0005624785.novalocal sm-notify[1013]: Version 2.5.4 starting
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting System Logging Service...
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting OpenSSH server daemon...
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting Permit User Sessions...
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Started Notify NFS peers of a restart.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Finished Permit User Sessions.
Feb 19 19:15:01 np0005624785.novalocal sshd[1015]: Server listening on 0.0.0.0 port 22.
Feb 19 19:15:01 np0005624785.novalocal sshd[1015]: Server listening on :: port 22.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Started OpenSSH server daemon.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Started Command Scheduler.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Started Getty on tty1.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Started Serial Getty on ttyS0.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Reached target Login Prompts.
Feb 19 19:15:01 np0005624785.novalocal crond[1019]: (CRON) STARTUP (1.5.7)
Feb 19 19:15:01 np0005624785.novalocal crond[1019]: (CRON) INFO (Syslog will be used instead of sendmail.)
Feb 19 19:15:01 np0005624785.novalocal crond[1019]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 12% if used.)
Feb 19 19:15:01 np0005624785.novalocal crond[1019]: (CRON) INFO (running with inotify support)
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1018]: Connection closed by 38.102.83.114 port 56726 [preauth]
Feb 19 19:15:01 np0005624785.novalocal rsyslogd[1014]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1014" x-info="https://www.rsyslog.com"] start
Feb 19 19:15:01 np0005624785.novalocal rsyslogd[1014]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Started System Logging Service.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Reached target Multi-User System.
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1033]: Unable to negotiate with 38.102.83.114 port 56728: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1049]: Connection reset by 38.102.83.114 port 56732 [preauth]
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1064]: Unable to negotiate with 38.102.83.114 port 56744: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1073]: Unable to negotiate with 38.102.83.114 port 56760: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 19 19:15:01 np0005624785.novalocal rsyslogd[1014]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1104]: Connection reset by 38.102.83.114 port 56780 [preauth]
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1124]: Unable to negotiate with 38.102.83.114 port 56784: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1135]: Unable to negotiate with 38.102.83.114 port 56796: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Feb 19 19:15:01 np0005624785.novalocal sshd-session[1086]: Connection closed by 38.102.83.114 port 56766 [preauth]
Feb 19 19:15:01 np0005624785.novalocal kdumpctl[1025]: kdump: No kdump initial ramdisk found.
Feb 19 19:15:01 np0005624785.novalocal kdumpctl[1025]: kdump: Rebuilding /boot/initramfs-5.14.0-681.el9.x86_64kdump.img
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1214]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 19 Feb 2026 19:15:01 +0000. Up 12.95 seconds.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Feb 19 19:15:01 np0005624785.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1475]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 19 Feb 2026 19:15:01 +0000. Up 13.36 seconds.
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1506]: #############################################################
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1509]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1515]: 256 SHA256:tKKiAPIm3YQ2wcdhUzLDUs44Q1rhUjXvEx1x0rTY/C8 root@np0005624785.novalocal (ECDSA)
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1522]: 256 SHA256:ptAvUckTc5fzc/uwejWH2VbIM0VAQGZFGFJTLqBsUpI root@np0005624785.novalocal (ED25519)
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1528]: 3072 SHA256:dsOcuBl8YAiwusHqUVdr9Mc7yCynxpL4AypNw8CnYuk root@np0005624785.novalocal (RSA)
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1529]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 19 19:15:01 np0005624785.novalocal cloud-init[1530]: #############################################################
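The SHA256 fingerprints in the banner above are OpenSSH-style: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A hedged sketch that recomputes one on the instance itself, using the host key path this log shows being generated:

    import base64, hashlib

    # Path taken from the key generation messages earlier in this log.
    with open("/etc/ssh/ssh_host_ed25519_key.pub") as fh:
        blob_b64 = fh.read().split()[1]  # "ssh-ed25519 AAAA... comment"

    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
    # Should match the ED25519 line in the banner above.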
Feb 19 19:15:01 np0005624785.novalocal dracut[1537]: dracut-057-110.git20260130.el9
Feb 19 19:15:02 np0005624785.novalocal cloud-init[1475]: Cloud-init v. 24.4-8.el9 finished at Thu, 19 Feb 2026 19:15:02 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.55 seconds
Feb 19 19:15:02 np0005624785.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Feb 19 19:15:02 np0005624785.novalocal systemd[1]: Reached target Cloud-init target.
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-681.el9.x86_64kdump.img 5.14.0-681.el9.x86_64
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 19 19:15:02 np0005624785.novalocal dracut[1539]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: Module 'resume' will not be installed, because it's in the list to be omitted!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: memstrack is not available
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: memstrack is not available
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
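Most of the "will not be installed" messages above appear twice; dracut evidently runs its module checks in more than one pass during this kdump initramfs rebuild. A small illustrative sketch for summarizing such output by extracting the quoted module names, matched against a line copied from this log:

    import re

    line = ("dracut module 'iscsi' will not be installed, "
            "because command 'iscsiadm' could not be found!")
    match = re.search(r"dracut module '([^']+)' will not be installed", line)
    print(match.group(1))  # -> iscsi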
Feb 19 19:15:03 np0005624785.novalocal dracut[1539]: *** Including module: systemd ***
Feb 19 19:15:04 np0005624785.novalocal dracut[1539]: *** Including module: fips ***
Feb 19 19:15:04 np0005624785.novalocal chronyd[804]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Feb 19 19:15:04 np0005624785.novalocal chronyd[804]: System clock TAI offset set to 37 seconds
Feb 19 19:15:04 np0005624785.novalocal dracut[1539]: *** Including module: systemd-initrd ***
Feb 19 19:15:04 np0005624785.novalocal dracut[1539]: *** Including module: i18n ***
Feb 19 19:15:04 np0005624785.novalocal dracut[1539]: *** Including module: drm ***
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 25 affinity: Operation not permitted
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: IRQ 25 affinity is now unmanaged
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: IRQ 31 affinity is now unmanaged
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: IRQ 28 affinity is now unmanaged
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: IRQ 32 affinity is now unmanaged
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: IRQ 30 affinity is now unmanaged
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 19 19:15:04 np0005624785.novalocal irqbalance[806]: IRQ 29 affinity is now unmanaged
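The irqbalance failures above come from writing a CPU mask to /proc/irq/<N>/smp_affinity: for interrupts the guest kernel refuses to migrate (common for virtio MSI vectors on KVM guests), the write fails with EPERM and irqbalance gives up on that IRQ. A hedged sketch of the same write path (requires root; IRQ number and mask are illustrative):

    import errno

    def set_irq_affinity(irq: int, cpumask: int) -> bool:
        """Pin an IRQ to a CPU mask; return False on EPERM, mirroring
        irqbalance's 'affinity is now unmanaged' fallback."""
        try:
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(f"{cpumask:x}\n")
            return True
        except OSError as e:
            if e.errno == errno.EPERM:
                print(f"Cannot change IRQ {irq} affinity: Operation not permitted")
                return False
            raise

    # set_irq_affinity(25, 0x1)  # pin IRQ 25 to CPU0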
Feb 19 19:15:04 np0005624785.novalocal dracut[1539]: *** Including module: prefixdevname ***
Feb 19 19:15:04 np0005624785.novalocal dracut[1539]: *** Including module: kernel-modules ***
Feb 19 19:15:04 np0005624785.novalocal kernel: block vda: the capability attribute has been deprecated.
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: *** Including module: kernel-modules-extra ***
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: *** Including module: qemu ***
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: *** Including module: fstab-sys ***
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: *** Including module: rootfs-block ***
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: *** Including module: terminfo ***
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: *** Including module: udev-rules ***
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: Skipping udev rule: 91-permissions.rules
Feb 19 19:15:05 np0005624785.novalocal dracut[1539]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: virtiofs ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: dracut-systemd ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: usrmount ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: base ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: fs-lib ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: kdumpbase ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:   microcode_ctl module: mangling fw_dir
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel" is ignored
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 19 19:15:06 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Including module: openssl ***
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Including module: shutdown ***
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Including module: squash ***
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Including modules done ***
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Installing kernel module dependencies ***
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Installing kernel module dependencies done ***
Feb 19 19:15:07 np0005624785.novalocal dracut[1539]: *** Resolving executable dependencies ***
Feb 19 19:15:09 np0005624785.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 19 19:15:09 np0005624785.novalocal dracut[1539]: *** Resolving executable dependencies done ***
Feb 19 19:15:09 np0005624785.novalocal dracut[1539]: *** Generating early-microcode cpio image ***
Feb 19 19:15:09 np0005624785.novalocal dracut[1539]: *** Store current command line parameters ***
Feb 19 19:15:09 np0005624785.novalocal dracut[1539]: Stored kernel commandline:
Feb 19 19:15:09 np0005624785.novalocal dracut[1539]: No dracut internal kernel commandline stored in the initramfs
Feb 19 19:15:09 np0005624785.novalocal dracut[1539]: *** Install squash loader ***
Feb 19 19:15:10 np0005624785.novalocal dracut[1539]: *** Squashing the files inside the initramfs ***
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: *** Squashing the files inside the initramfs done ***
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: *** Creating image file '/boot/initramfs-5.14.0-681.el9.x86_64kdump.img' ***
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: *** Hardlinking files ***
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Mode:           real
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Files:          50
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Linked:         0 files
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Compared:       0 xattrs
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Compared:       0 files
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Saved:          0 B
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: Duration:       0.000295 seconds
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: *** Hardlinking files done ***
Feb 19 19:15:11 np0005624785.novalocal dracut[1539]: *** Creating initramfs image file '/boot/initramfs-5.14.0-681.el9.x86_64kdump.img' done ***
Feb 19 19:15:11 np0005624785.novalocal kdumpctl[1025]: kdump: kexec: loaded kdump kernel
Feb 19 19:15:11 np0005624785.novalocal kdumpctl[1025]: kdump: Starting kdump: [OK]
Feb 19 19:15:12 np0005624785.novalocal systemd[1]: Finished Crash recovery kernel arming.
Feb 19 19:15:12 np0005624785.novalocal systemd[1]: Startup finished in 1.308s (kernel) + 2.420s (initrd) + 19.799s (userspace) = 23.529s.
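At this point the kdump initramfs has been built and kexec has staged the crash kernel into the memory reserved by the crashkernel= boot parameter. The kernel exposes that armed state through /sys/kernel/kexec_crash_loaded; a quick check, assuming a Linux host:

    from pathlib import Path

    def kdump_armed() -> bool:
        """True when a crash (kdump) kernel is loaded via kexec."""
        flag = Path("/sys/kernel/kexec_crash_loaded")
        return flag.exists() and flag.read_text().strip() == "1"

    print("kdump armed:", kdump_armed())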
Feb 19 19:15:16 np0005624785.novalocal sshd-session[4797]: Accepted publickey for zuul from 38.102.83.114 port 39380 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Feb 19 19:15:16 np0005624785.novalocal systemd[1]: Created slice User Slice of UID 1000.
Feb 19 19:15:16 np0005624785.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 19 19:15:16 np0005624785.novalocal systemd-logind[810]: New session 1 of user zuul.
Feb 19 19:15:16 np0005624785.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 19 19:15:16 np0005624785.novalocal systemd[1]: Starting User Manager for UID 1000...
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Queued start job for default target Main User Target.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Created slice User Application Slice.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Started Daily Cleanup of User's Temporary Directories.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Reached target Paths.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Reached target Timers.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Starting D-Bus User Message Bus Socket...
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Starting Create User's Volatile Files and Directories...
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Finished Create User's Volatile Files and Directories.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Listening on D-Bus User Message Bus Socket.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Reached target Sockets.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Reached target Basic System.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Reached target Main User Target.
Feb 19 19:15:16 np0005624785.novalocal systemd[4801]: Startup finished in 133ms.
Feb 19 19:15:16 np0005624785.novalocal systemd[1]: Started User Manager for UID 1000.
Feb 19 19:15:16 np0005624785.novalocal systemd[1]: Started Session 1 of User zuul.
Feb 19 19:15:16 np0005624785.novalocal sshd-session[4797]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:15:17 np0005624785.novalocal python3[4884]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:15:19 np0005624785.novalocal python3[4912]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:15:25 np0005624785.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 19 19:15:25 np0005624785.novalocal python3[4972]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:15:26 np0005624785.novalocal python3[5012]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 19 19:15:28 np0005624785.novalocal python3[5038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiUSme1/hBDr5OcNicHZn3c9PMJj9GPA0cczAuCKJBiAqLb1mKr8PgvzovkGxryTIhMrEb/BKgf5zDTsg2F/f+N39r/939HTWoHpZsmzLJEagu1GRVSgRSCEWuMrfSILwwxHKBYTzCc5nUtINWj1gMajWtR6BlOkBF6oQYOz5RWCxuje8SKHy0Kflb/jDClTqH7ii+XcqH402gsFSckAyr9jphgbc0uwcrFRHrjKNGbARKZoWGZTpDBS41t3cjeng3jmR/kwIhpAVqv1l8aPpeib1MLRzcUQD/+2CeuCnC5N8vwwIcjSBXzylWUvKJmIUDGxp5SnwsFDJlPk/sRYnOgXQ8FYxDdGFhj5LiFMqOHfKfZMD1MZcryG/0e6UHPzaW0/K+mEIc4LxbpPBR/SaA9ZtxFb8Z74blGKxsfZabagjbt1ed6OSaWdRjyFbCdxjW58rRlSLkeVeqXif+J5Jy5zK04es4gSU6E71rvK5KMmyFYE01KmzCIL1Qv93q5P0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:29 np0005624785.novalocal python3[5062]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:29 np0005624785.novalocal python3[5161]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:30 np0005624785.novalocal python3[5232]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771528529.5597177-207-201585432439934/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ed60c6e4cbfd4ec183eaa29a879cb936_id_rsa follow=False checksum=5c00c1d908b7afba1ad7ae5fa41019f36ba94d9f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:31 np0005624785.novalocal python3[5355]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:31 np0005624785.novalocal python3[5426]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771528531.1206386-240-48633064751538/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ed60c6e4cbfd4ec183eaa29a879cb936_id_rsa.pub follow=False checksum=5c8d831a0628d8b578750cbbff78e69bc773a853 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
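Note that Ansible logs file modes in decimal: mode=448 on the ~/.ssh directory is 0o700, mode=384 on id_rsa is 0o600, and mode=420 on id_rsa.pub is 0o644, so the key material gets the usual restrictive permissions. A one-liner to sanity-check the conversions:

    for decimal in (288, 384, 420, 448, 493):
        print(decimal, "=", oct(decimal))
    # 288 = 0o440, 384 = 0o600, 420 = 0o644, 448 = 0o700, 493 = 0o755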
Feb 19 19:15:33 np0005624785.novalocal python3[5474]: ansible-ping Invoked with data=pong
Feb 19 19:15:34 np0005624785.novalocal python3[5498]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:15:34 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 27 affinity: Operation not permitted
Feb 19 19:15:34 np0005624785.novalocal irqbalance[806]: IRQ 27 affinity is now unmanaged
Feb 19 19:15:35 np0005624785.novalocal python3[5556]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 19 19:15:36 np0005624785.novalocal python3[5588]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:36 np0005624785.novalocal python3[5612]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:37 np0005624785.novalocal python3[5636]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:37 np0005624785.novalocal python3[5660]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:37 np0005624785.novalocal python3[5684]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:38 np0005624785.novalocal python3[5708]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:39 np0005624785.novalocal sudo[5732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptdcszaptymehfpfjfgqtrmzxxvzgoqr ; /usr/bin/python3'
Feb 19 19:15:39 np0005624785.novalocal sudo[5732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:40 np0005624785.novalocal python3[5734]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:40 np0005624785.novalocal sudo[5732]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:40 np0005624785.novalocal sudo[5810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssugxgaivqkukuotiznzdtvkrqjmkxat ; /usr/bin/python3'
Feb 19 19:15:40 np0005624785.novalocal sudo[5810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:40 np0005624785.novalocal python3[5812]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:40 np0005624785.novalocal sudo[5810]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:41 np0005624785.novalocal sudo[5883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxdwllitnpgkwlcfxulfkdmdchjgeqlz ; /usr/bin/python3'
Feb 19 19:15:41 np0005624785.novalocal sudo[5883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:41 np0005624785.novalocal python3[5885]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771528540.370474-21-280063034268870/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:41 np0005624785.novalocal sudo[5883]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:42 np0005624785.novalocal python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:42 np0005624785.novalocal python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:43 np0005624785.novalocal python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:43 np0005624785.novalocal python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:43 np0005624785.novalocal python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:43 np0005624785.novalocal python3[6053]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:44 np0005624785.novalocal python3[6077]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:44 np0005624785.novalocal python3[6101]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:44 np0005624785.novalocal python3[6125]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:45 np0005624785.novalocal python3[6149]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:45 np0005624785.novalocal python3[6173]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:45 np0005624785.novalocal python3[6197]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:45 np0005624785.novalocal python3[6221]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:46 np0005624785.novalocal python3[6245]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:46 np0005624785.novalocal python3[6269]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:46 np0005624785.novalocal python3[6293]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:46 np0005624785.novalocal python3[6317]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:47 np0005624785.novalocal python3[6341]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:47 np0005624785.novalocal python3[6365]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:47 np0005624785.novalocal python3[6389]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:48 np0005624785.novalocal python3[6413]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:48 np0005624785.novalocal python3[6437]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:48 np0005624785.novalocal python3[6461]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:48 np0005624785.novalocal python3[6485]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:49 np0005624785.novalocal python3[6509]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:15:49 np0005624785.novalocal python3[6533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
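Each ansible-authorized_key call above idempotently ensures one public key is present in /home/zuul/.ssh/authorized_keys. A simplified sketch of that behavior (the real module additionally handles key options, exclusive mode, path overrides, and SELinux contexts):

    import os
    from pathlib import Path

    def ensure_authorized_key(home: str, key: str) -> bool:
        """Append `key` to authorized_keys unless already present.
        Returns True if the file changed."""
        ssh_dir = Path(home) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth = ssh_dir / "authorized_keys"
        lines = auth.read_text().splitlines() if auth.exists() else []
        if key.strip() in (line.strip() for line in lines):
            return False
        with auth.open("a") as f:
            f.write(key.strip() + "\n")
        os.chmod(auth, 0o600)
        return True

    # ensure_authorized_key("/home/zuul", "ssh-ed25519 AAAA... user@host")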
Feb 19 19:15:52 np0005624785.novalocal sudo[6557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swcsyyasbnihfkofshnceinrfydqemos ; /usr/bin/python3'
Feb 19 19:15:52 np0005624785.novalocal sudo[6557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:52 np0005624785.novalocal python3[6559]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 19 19:15:52 np0005624785.novalocal systemd[1]: Starting Time & Date Service...
Feb 19 19:15:52 np0005624785.novalocal systemd[1]: Started Time & Date Service.
Feb 19 19:15:52 np0005624785.novalocal systemd-timedated[6561]: Changed time zone to 'UTC' (UTC).
Feb 19 19:15:52 np0005624785.novalocal sudo[6557]: pam_unix(sudo:session): session closed for user root
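community.general.timezone delegates to systemd-timedated, which is why systemd starts the Time & Date Service on demand and logs the zone change. The command-line equivalent goes through the same service; a sketch (root required):

    import subprocess

    def set_timezone(tz: str) -> None:
        """Change the system timezone via systemd-timedated."""
        subprocess.run(["timedatectl", "set-timezone", tz], check=True)

    # set_timezone("UTC")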
Feb 19 19:15:52 np0005624785.novalocal sudo[6588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrsfzlbcieevdyelizykajhktpytdxyf ; /usr/bin/python3'
Feb 19 19:15:52 np0005624785.novalocal sudo[6588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:53 np0005624785.novalocal python3[6590]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:53 np0005624785.novalocal sudo[6588]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:53 np0005624785.novalocal python3[6666]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:53 np0005624785.novalocal python3[6737]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1771528553.232329-153-84066786008840/source _original_basename=tmpvxrlhmso follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:54 np0005624785.novalocal python3[6837]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:54 np0005624785.novalocal python3[6908]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771528554.0319319-183-7246156251877/source _original_basename=tmpxxly9mdp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
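The checksum da39a3ee5e6b4b0d3255bfef95601890afd80709 reported for both sub_nodes files is the SHA-1 of the empty string, i.e. the copies deployed zero-byte files (plausibly because this job has no sub-nodes). Quick confirmation:

    import hashlib
    print(hashlib.sha1(b"").hexdigest())
    # -> da39a3ee5e6b4b0d3255bfef95601890afd80709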
Feb 19 19:15:55 np0005624785.novalocal sudo[7008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpqzlcdzccrvfyddtrftvwstiqrwqhrd ; /usr/bin/python3'
Feb 19 19:15:55 np0005624785.novalocal sudo[7008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:55 np0005624785.novalocal python3[7010]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:55 np0005624785.novalocal sudo[7008]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:55 np0005624785.novalocal sudo[7081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bztgqdwgkxtnabsaqnzfdtzysaukjmmv ; /usr/bin/python3'
Feb 19 19:15:55 np0005624785.novalocal sudo[7081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:55 np0005624785.novalocal python3[7083]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771528555.1341279-231-71249103717682/source _original_basename=tmp39ynq_cb follow=False checksum=a6f52f721c1f41e1ab4b26154f6ef99f64686d2d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:55 np0005624785.novalocal sudo[7081]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:56 np0005624785.novalocal python3[7131]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:15:56 np0005624785.novalocal python3[7157]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:15:56 np0005624785.novalocal sudo[7235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otzpavknjrcnybwbjxybpjwpuvhrofrl ; /usr/bin/python3'
Feb 19 19:15:56 np0005624785.novalocal sudo[7235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:56 np0005624785.novalocal python3[7237]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:15:56 np0005624785.novalocal sudo[7235]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:57 np0005624785.novalocal sudo[7308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izasrgeeycwnnjzvspztduilcskzswvx ; /usr/bin/python3'
Feb 19 19:15:57 np0005624785.novalocal sudo[7308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:57 np0005624785.novalocal python3[7310]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1771528556.7288818-273-173189834829449/source _original_basename=tmp_obyxqbq follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:15:57 np0005624785.novalocal sudo[7308]: pam_unix(sudo:session): session closed for user root
Feb 19 19:15:57 np0005624785.novalocal sudo[7359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksyxeqpcnzgycsnewpvezpqpqszdfgur ; /usr/bin/python3'
Feb 19 19:15:57 np0005624785.novalocal sudo[7359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:15:57 np0005624785.novalocal python3[7361]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-c207-bc7f-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:15:57 np0005624785.novalocal sudo[7359]: pam_unix(sudo:session): session closed for user root
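The sudoers drop-in is written with mode=288 (0o440) and the whole configuration is then validated with /usr/sbin/visudo -c, which fails on syntax errors before a bad rule can lock anyone out. A sketch of the slightly safer variant that validates the candidate file itself before installing it (assumes root; the rule text is illustrative):

    import os, shutil, subprocess, tempfile

    def install_sudoers_dropin(content: str, dest: str) -> None:
        """Validate a sudoers fragment with `visudo -c -f` before
        moving it into /etc/sudoers.d."""
        with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
            tmp.write(content)
            path = tmp.name
        os.chmod(path, 0o440)
        subprocess.run(["visudo", "-c", "-f", path], check=True)
        shutil.move(path, dest)

    # install_sudoers_dropin("zuul ALL=(ALL) NOPASSWD:ALL\n",
    #                        "/etc/sudoers.d/zuul")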
Feb 19 19:15:58 np0005624785.novalocal python3[7389]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-c207-bc7f-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 19 19:15:59 np0005624785.novalocal python3[7417]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:16:15 np0005624785.novalocal sudo[7441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxcsuppbwdhborpqoqarfwibglczeojq ; /usr/bin/python3'
Feb 19 19:16:15 np0005624785.novalocal sudo[7441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:16:16 np0005624785.novalocal python3[7443]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:16:16 np0005624785.novalocal sudo[7441]: pam_unix(sudo:session): session closed for user root
Feb 19 19:16:22 np0005624785.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 19 19:16:51 np0005624785.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 19 19:16:51 np0005624785.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9704] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 19 19:16:51 np0005624785.novalocal systemd-udevd[7448]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9881] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9909] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9911] device (eth1): carrier: link connected
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9913] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9918] policy: auto-activating connection 'Wired connection 1' (5ffa268c-a4ee-37a5-8060-0039bff52aa4)
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9922] device (eth1): Activation: starting connection 'Wired connection 1' (5ffa268c-a4ee-37a5-8060-0039bff52aa4)
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9923] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9926] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9930] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:16:51 np0005624785.novalocal NetworkManager[867]: <info>  [1771528611.9937] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:16:52 np0005624785.novalocal python3[7475]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-468b-dd2a-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
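`ip -j link` emits the link table as JSON, which is presumably why the job queries it this way instead of scraping the human-readable output. A sketch of consuming it, e.g. to spot the freshly hot-plugged eth1:

    import json, subprocess

    out = subprocess.run(["ip", "-j", "link"],
                         capture_output=True, text=True, check=True).stdout
    for link in json.loads(out):
        print(link["ifname"], link.get("operstate"))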
Feb 19 19:16:59 np0005624785.novalocal sudo[7553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxglwdokiofbxanvhcbpxaqdenqqqxmu ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 19 19:16:59 np0005624785.novalocal sudo[7553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:16:59 np0005624785.novalocal python3[7555]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:16:59 np0005624785.novalocal sudo[7553]: pam_unix(sudo:session): session closed for user root
Feb 19 19:16:59 np0005624785.novalocal sudo[7626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftjeojusstrndjtogywbyzjczlorynuf ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 19 19:16:59 np0005624785.novalocal sudo[7626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:17:00 np0005624785.novalocal python3[7628]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771528619.5273046-102-36692809224692/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7a8e49e106e52aec65153e2a66b94e291eb3a4d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:17:00 np0005624785.novalocal sudo[7626]: pam_unix(sudo:session): session closed for user root
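The deployed ci-private-network.nmconnection is a NetworkManager keyfile; NM only honors files under /etc/NetworkManager/system-connections/ that are root-owned with mode 0600, which matches the copy parameters above. A hedged sketch of deploying such a keyfile (the contents are illustrative, not the job's actual template; the job then restarts NetworkManager outright, which also picks the file up):

    import os, subprocess, textwrap

    # Hypothetical keyfile for the hot-plugged second NIC.
    KEYFILE = textwrap.dedent("""\
        [connection]
        id=ci-private-network
        type=ethernet
        interface-name=eth1

        [ipv4]
        method=auto
    """)

    dest = ("/etc/NetworkManager/system-connections/"
            "ci-private-network.nmconnection")
    with open(dest, "w") as f:
        f.write(KEYFILE)
    os.chmod(dest, 0o600)   # NM ignores group/world-readable keyfiles
    subprocess.run(["nmcli", "connection", "reload"], check=True)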
Feb 19 19:17:00 np0005624785.novalocal sudo[7676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dawhwlcwraflhukgoseknbaafgtgxdjx ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 19 19:17:00 np0005624785.novalocal sudo[7676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:17:00 np0005624785.novalocal python3[7678]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Stopped Network Manager Wait Online.
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Stopping Network Manager Wait Online...
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Stopping Network Manager...
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.8912] caught SIGTERM, shutting down normally.
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.8921] dhcp4 (eth0): canceled DHCP transaction
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.8922] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.8922] dhcp4 (eth0): state changed no lease
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.8925] manager: NetworkManager state is now CONNECTING
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.9022] dhcp4 (eth1): canceled DHCP transaction
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.9022] dhcp4 (eth1): state changed no lease
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[867]: <info>  [1771528620.9084] exiting (success)
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Stopped Network Manager.
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Starting Network Manager...
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528620.9458] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:c101bf30-d7ef-4612-9fa1-9cb228425d0e)
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528620.9461] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 19 19:17:00 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528620.9508] manager[0x5627ab3b0000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 19 19:17:00 np0005624785.novalocal systemd[1]: Starting Hostname Service...
Feb 19 19:17:01 np0005624785.novalocal systemd[1]: Started Hostname Service.
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0184] hostname: hostname: using hostnamed
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0184] hostname: static hostname changed from (none) to "np0005624785.novalocal"
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0189] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0195] manager[0x5627ab3b0000]: rfkill: Wi-Fi hardware radio set enabled
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0196] manager[0x5627ab3b0000]: rfkill: WWAN hardware radio set enabled
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0227] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0227] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0228] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0228] manager: Networking is enabled by state file
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0231] settings: Loaded settings plugin: keyfile (internal)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0235] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0264] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0274] dhcp: init: Using DHCP client 'internal'
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0277] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0282] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0287] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0294] device (lo): Activation: starting connection 'lo' (8f9ecf5e-4818-46e5-a2b3-372e6bc78723)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0299] device (eth0): carrier: link connected
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0303] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0307] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0308] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0313] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0319] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0325] device (eth1): carrier: link connected
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0333] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0342] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5ffa268c-a4ee-37a5-8060-0039bff52aa4) (indicated)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0342] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0349] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0356] device (eth1): Activation: starting connection 'Wired connection 1' (5ffa268c-a4ee-37a5-8060-0039bff52aa4)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0364] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 19 19:17:01 np0005624785.novalocal systemd[1]: Started Network Manager.
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0369] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0371] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0373] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0375] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0383] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0385] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0387] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0388] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0393] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0395] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0400] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0401] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0428] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0431] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 19 19:17:01 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528621.0434] device (lo): Activation: successful, device activated.
Feb 19 19:17:01 np0005624785.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 19 19:17:01 np0005624785.novalocal sudo[7676]: pam_unix(sudo:session): session closed for user root
Feb 19 19:17:01 np0005624785.novalocal python3[7743]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-468b-dd2a-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1342] dhcp4 (eth0): state changed new lease, address=38.102.83.220
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1349] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1416] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1463] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1464] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1466] manager: NetworkManager state is now CONNECTED_SITE
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1468] device (eth0): Activation: successful, device activated.
Feb 19 19:17:02 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528622.1472] manager: NetworkManager state is now CONNECTED_GLOBAL
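The sequence above is NetworkManager's assume-on-startup path: each device walks unmanaged -> unavailable -> disconnected -> prepare/config/ip-config -> ip-check -> secondaries -> activated, and eth0's DHCP lease (38.102.83.220) promotes it to the IPv4 default route. A minimal sketch for reading the resulting device states from a script, assuming nmcli is installed; it queries the live daemon rather than replaying this log:

#!/usr/bin/env python3
# Sketch: list NetworkManager device states the way the transitions
# above end up (e.g. eth0 -> connected). Assumes nmcli is present.
import subprocess

def device_states():
    # -t = terse, -f = fields; output lines look like "eth0:connected"
    out = subprocess.run(
        ["nmcli", "-t", "-f", "DEVICE,STATE", "device"],
        check=True, capture_output=True, text=True,
    ).stdout
    return dict(line.split(":", 1) for line in out.splitlines() if line)

if __name__ == "__main__":
    for dev, state in device_states().items():
        print(f"{dev}: {state}")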
Feb 19 19:17:12 np0005624785.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 19 19:17:31 np0005624785.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 19 19:17:33 np0005624785.novalocal systemd[4801]: Starting Mark boot as successful...
Feb 19 19:17:33 np0005624785.novalocal systemd[4801]: Finished Mark boot as successful.
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.4796] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 19 19:17:46 np0005624785.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 19 19:17:46 np0005624785.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5067] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5069] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5078] device (eth1): Activation: successful, device activated.
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5088] manager: startup complete
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5090] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <warn>  [1771528666.5098] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5109] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5212] dhcp4 (eth1): canceled DHCP transaction
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5212] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5212] dhcp4 (eth1): state changed no lease
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5229] policy: auto-activating connection 'ci-private-network' (c06ca1cd-73d8-5811-b2c8-fc60202bb10e)
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5236] device (eth1): Activation: starting connection 'ci-private-network' (c06ca1cd-73d8-5811-b2c8-fc60202bb10e)
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5238] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5242] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5251] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5262] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5310] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5313] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:17:46 np0005624785.novalocal NetworkManager[7688]: <info>  [1771528666.5319] device (eth1): Activation: successful, device activated.
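eth1 tells a different story: its DHCP attempt under 'Wired connection 1' produced no lease, the device was failed with reason 'ip-config-unavailable', and NetworkManager auto-activated the fallback profile 'ci-private-network' instead. A hedged sketch of creating such a fallback profile follows; the static address is a placeholder, since the log never shows the profile's contents:

#!/usr/bin/env python3
# Sketch: create a static fallback profile like 'ci-private-network'.
# The 172.17.0.10/24 address is a placeholder; the real profile
# contents are not in the log. Assumes nmcli, run as root.
import subprocess

def ensure_fallback(ifname="eth1", name="ci-private-network"):
    subprocess.run(
        ["nmcli", "connection", "add",
         "type", "ethernet",
         "con-name", name,
         "ifname", ifname,
         "ipv4.method", "manual",
         "ipv4.addresses", "172.17.0.10/24",  # placeholder address
         "connection.autoconnect", "yes",
         # Lower priority than the DHCP profile, so it only wins
         # after 'Wired connection 1' fails, as seen above.
         "connection.autoconnect-priority", "-10"],
        check=True,
    )

if __name__ == "__main__":
    ensure_fallback()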
Feb 19 19:17:56 np0005624785.novalocal sudo[7866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajszvdtgssmsvthozjygokduujqojbhj ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 19 19:17:56 np0005624785.novalocal sudo[7866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:17:56 np0005624785.novalocal python3[7868]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:17:56 np0005624785.novalocal sudo[7866]: pam_unix(sudo:session): session closed for user root
Feb 19 19:17:56 np0005624785.novalocal sudo[7939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lteuxpvlnuuagylksracrxniweetizzd ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 19 19:17:56 np0005624785.novalocal sudo[7939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:17:56 np0005624785.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 19 19:17:56 np0005624785.novalocal python3[7941]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771528676.0732262-259-141709989085602/source _original_basename=tmp2gndr3wa follow=False checksum=14bbc63098ef78a6ecf8a9cd4da38182f7663e3d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:17:56 np0005624785.novalocal sudo[7939]: pam_unix(sudo:session): session closed for user root
Feb 19 19:18:22 np0005624785.novalocal sshd[1015]: Timeout before authentication for connection from 14.103.114.234 to 38.102.83.220, pid = 7444
Feb 19 19:18:56 np0005624785.novalocal sshd-session[4811]: Received disconnect from 38.102.83.114 port 39380:11: disconnected by user
Feb 19 19:18:56 np0005624785.novalocal sshd-session[4811]: Disconnected from user zuul 38.102.83.114 port 39380
Feb 19 19:18:56 np0005624785.novalocal sshd-session[4797]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:18:56 np0005624785.novalocal systemd-logind[810]: Session 1 logged out. Waiting for processes to exit.
Feb 19 19:20:33 np0005624785.novalocal systemd[4801]: Created slice User Background Tasks Slice.
Feb 19 19:20:33 np0005624785.novalocal systemd[4801]: Starting Cleanup of User's Temporary Files and Directories...
Feb 19 19:20:33 np0005624785.novalocal systemd[4801]: Finished Cleanup of User's Temporary Files and Directories.
Feb 19 19:25:06 np0005624785.novalocal sshd-session[7971]: Accepted publickey for zuul from 38.102.83.114 port 41012 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 19:25:06 np0005624785.novalocal systemd-logind[810]: New session 3 of user zuul.
Feb 19 19:25:07 np0005624785.novalocal systemd[1]: Started Session 3 of User zuul.
Feb 19 19:25:07 np0005624785.novalocal sshd-session[7971]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:25:07 np0005624785.novalocal sudo[7998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjlcxusvhpiwusytwmsfzasxcvojeov ; /usr/bin/python3'
Feb 19 19:25:07 np0005624785.novalocal sudo[7998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:07 np0005624785.novalocal python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-1fce-262f-000000002187-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:25:07 np0005624785.novalocal sudo[7998]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:07 np0005624785.novalocal sudo[8027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxknhqrxpqwmapajsqxkjgkkukkshzxm ; /usr/bin/python3'
Feb 19 19:25:07 np0005624785.novalocal sudo[8027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:07 np0005624785.novalocal python3[8029]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:25:07 np0005624785.novalocal sudo[8027]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:07 np0005624785.novalocal sudo[8053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyhuwjjinouezhkuiflbdqriuzmckbvh ; /usr/bin/python3'
Feb 19 19:25:07 np0005624785.novalocal sudo[8053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:07 np0005624785.novalocal python3[8055]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:25:07 np0005624785.novalocal sudo[8053]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:07 np0005624785.novalocal sudo[8079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsonxlniafozjydxtxbzouovktejanxv ; /usr/bin/python3'
Feb 19 19:25:07 np0005624785.novalocal sudo[8079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:08 np0005624785.novalocal python3[8081]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:25:08 np0005624785.novalocal sudo[8079]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:08 np0005624785.novalocal sudo[8105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxlowgfjvdukrhsrtlrhggxgkyrgfric ; /usr/bin/python3'
Feb 19 19:25:08 np0005624785.novalocal sudo[8105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:08 np0005624785.novalocal python3[8107]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:25:08 np0005624785.novalocal sudo[8105]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:08 np0005624785.novalocal sudo[8131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvjajmlkvdztfsowffzlhdgmilzxocgm ; /usr/bin/python3'
Feb 19 19:25:08 np0005624785.novalocal sudo[8131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:08 np0005624785.novalocal python3[8133]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:25:08 np0005624785.novalocal sudo[8131]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:09 np0005624785.novalocal sudo[8209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiffugilpxdaisfpbmgragtekiqdnssl ; /usr/bin/python3'
Feb 19 19:25:09 np0005624785.novalocal sudo[8209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:09 np0005624785.novalocal python3[8211]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:25:09 np0005624785.novalocal sudo[8209]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:09 np0005624785.novalocal sudo[8282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duicvhizsxijklukmvcqteqyqbpemcwi ; /usr/bin/python3'
Feb 19 19:25:09 np0005624785.novalocal sudo[8282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:09 np0005624785.novalocal python3[8284]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771529109.184832-514-166447552317273/source _original_basename=tmpnc0x1uxf follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:25:09 np0005624785.novalocal sudo[8282]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:10 np0005624785.novalocal sudo[8332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgmzigcxcdknswjhxeyhshibiylprir ; /usr/bin/python3'
Feb 19 19:25:10 np0005624785.novalocal sudo[8332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:10 np0005624785.novalocal python3[8334]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 19:25:10 np0005624785.novalocal systemd[1]: Reloading.
Feb 19 19:25:10 np0005624785.novalocal systemd-rc-local-generator[8352]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:25:10 np0005624785.novalocal sudo[8332]: pam_unix(sudo:session): session closed for user root
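The two tasks above drop /etc/systemd/system.conf.d/override.conf (its body is masked as NOT_LOGGING_PARAMETER) and then run the systemd_service module with daemon_reload=True, which produces the 'Reloading.' line. A sketch of the same drop-in-plus-reload step; the [Manager] setting shown is hypothetical, chosen only because the very next tasks expect io.max files to exist:

#!/usr/bin/env python3
# Sketch: install a system.conf drop-in and reload systemd, mirroring
# the override.conf copy + daemon_reload above. The [Manager] key is
# hypothetical -- the real payload is not logged.
import pathlib
import subprocess

DROPIN = pathlib.Path("/etc/systemd/system.conf.d/override.conf")

def install_dropin():
    DROPIN.parent.mkdir(mode=0o755, parents=True, exist_ok=True)
    DROPIN.write_text(
        "[Manager]\n"
        "DefaultIOAccounting=yes\n"  # hypothetical; real content masked
    )
    DROPIN.chmod(0o644)
    # Equivalent of the ansible systemd_service daemon_reload=True call
    subprocess.run(["systemctl", "daemon-reload"], check=True)

if __name__ == "__main__":
    install_dropin()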
Feb 19 19:25:12 np0005624785.novalocal sudo[8395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvtzmdlavjelvjhdfacwbvdufpcdkbdl ; /usr/bin/python3'
Feb 19 19:25:12 np0005624785.novalocal sudo[8395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:12 np0005624785.novalocal python3[8397]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 19 19:25:12 np0005624785.novalocal sudo[8395]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:12 np0005624785.novalocal sudo[8421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkidwxfznmmsrbnojhlxqugdercgure ; /usr/bin/python3'
Feb 19 19:25:12 np0005624785.novalocal sudo[8421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:13 np0005624785.novalocal python3[8423]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:25:13 np0005624785.novalocal sudo[8421]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:13 np0005624785.novalocal sudo[8449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlofdzygrgnxjlykukkfawhunjideelw ; /usr/bin/python3'
Feb 19 19:25:13 np0005624785.novalocal sudo[8449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:13 np0005624785.novalocal python3[8451]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:25:13 np0005624785.novalocal sudo[8449]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:13 np0005624785.novalocal sudo[8477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dseenoxvcqztlorymcitpdoafolofhtx ; /usr/bin/python3'
Feb 19 19:25:13 np0005624785.novalocal sudo[8477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:13 np0005624785.novalocal python3[8479]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:25:13 np0005624785.novalocal sudo[8477]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:13 np0005624785.novalocal sudo[8505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcepncuhbnnbpozvfvjiukcodfszuht ; /usr/bin/python3'
Feb 19 19:25:13 np0005624785.novalocal sudo[8505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:13 np0005624785.novalocal python3[8507]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:25:13 np0005624785.novalocal sudo[8505]: pam_unix(sudo:session): session closed for user root
Feb 19 19:25:14 np0005624785.novalocal python3[8534]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-1fce-262f-00000000218e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:25:15 np0005624785.novalocal python3[8564]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
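The loop above writes one identical cgroup v2 IO limit into io.max under init.scope, machine.slice, system.slice and user.slice, then reads them all back; "252:0" is the MAJ:MIN of /dev/vda reported by the earlier lsblk task. The same write as a sketch, assuming cgroup v2 with the io controller enabled and root privileges:

#!/usr/bin/env python3
# Sketch: apply the cgroup v2 IO limits written above. "252:0" is the
# MAJ:MIN of /dev/vda (see the lsblk task); values match the log.
import pathlib

LIMIT = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
SLICES = ["init.scope", "machine.slice", "system.slice", "user.slice"]

def apply_io_limits():
    for unit in SLICES:
        path = pathlib.Path("/sys/fs/cgroup", unit, "io.max")
        # Requires root and the io controller enabled on cgroup v2.
        path.write_text(LIMIT + "\n")
        print(unit, "->", path.read_text().strip())

if __name__ == "__main__":
    apply_io_limits()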
Feb 19 19:25:17 np0005624785.novalocal sshd-session[7974]: Connection closed by 38.102.83.114 port 41012
Feb 19 19:25:17 np0005624785.novalocal sshd-session[7971]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:25:17 np0005624785.novalocal systemd-logind[810]: Session 3 logged out. Waiting for processes to exit.
Feb 19 19:25:17 np0005624785.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Feb 19 19:25:17 np0005624785.novalocal systemd[1]: session-3.scope: Consumed 3.819s CPU time.
Feb 19 19:25:17 np0005624785.novalocal systemd-logind[810]: Removed session 3.
Feb 19 19:25:18 np0005624785.novalocal sshd-session[8572]: Accepted publickey for zuul from 38.102.83.114 port 42086 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 19:25:18 np0005624785.novalocal systemd-logind[810]: New session 4 of user zuul.
Feb 19 19:25:18 np0005624785.novalocal systemd[1]: Started Session 4 of User zuul.
Feb 19 19:25:18 np0005624785.novalocal sshd-session[8572]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:25:18 np0005624785.novalocal sudo[8599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpbgxpwgknfpgnkeandtqoeixmpriqit ; /usr/bin/python3'
Feb 19 19:25:18 np0005624785.novalocal sudo[8599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:25:19 np0005624785.novalocal python3[8601]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 19 19:25:24 np0005624785.novalocal sshd-session[8608]: Invalid user oracle from 158.174.210.161 port 34626
Feb 19 19:25:24 np0005624785.novalocal sshd-session[8608]: Received disconnect from 158.174.210.161 port 34626:11: Bye Bye [preauth]
Feb 19 19:25:24 np0005624785.novalocal sshd-session[8608]: Disconnected from invalid user oracle 158.174.210.161 port 34626 [preauth]
Feb 19 19:25:27 np0005624785.novalocal setsebool[8638]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 19 19:25:27 np0005624785.novalocal setsebool[8638]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
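Installing podman/buildah also flips two SELinux booleans, as logged by setsebool above. The direct equivalent, sketched (requires root; -P persists the change across reboots):

#!/usr/bin/env python3
# Sketch: the direct equivalent of the two setsebool lines above.
import subprocess

subprocess.run(
    ["setsebool", "-P", "virt_use_nfs=1", "virt_sandbox_use_all_caps=1"],
    check=True,
)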
Feb 19 19:25:34 np0005624785.novalocal irqbalance[806]: Cannot change IRQ 26 affinity: Operation not permitted
Feb 19 19:25:34 np0005624785.novalocal irqbalance[806]: IRQ 26 affinity is now unmanaged
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  Converting 385 SID table entries...
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:25:37 np0005624785.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  Converting 388 SID table entries...
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:25:47 np0005624785.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:26:06 np0005624785.novalocal dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 19 19:26:06 np0005624785.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 19:26:06 np0005624785.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 19 19:26:06 np0005624785.novalocal systemd[1]: Reloading.
Feb 19 19:26:06 np0005624785.novalocal systemd-rc-local-generator[9435]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:26:06 np0005624785.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 19:26:07 np0005624785.novalocal sudo[8599]: pam_unix(sudo:session): session closed for user root
Feb 19 19:26:10 np0005624785.novalocal python3[13541]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ef9-e89a-1c35-8dcd-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:26:10 np0005624785.novalocal kernel: evm: overlay not supported
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: Starting D-Bus User Message Bus...
Feb 19 19:26:10 np0005624785.novalocal dbus-broker-launch[14493]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 19 19:26:10 np0005624785.novalocal dbus-broker-launch[14493]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: Started D-Bus User Message Bus.
Feb 19 19:26:10 np0005624785.novalocal dbus-broker-lau[14493]: Ready
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: Created slice Slice /user.
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: podman-14467.scope: unit configures an IP firewall, but not running as root.
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: (This warning is only shown for the first unit using IP firewalling.)
Feb 19 19:26:10 np0005624785.novalocal systemd[4801]: Started podman-14467.scope.
Feb 19 19:26:11 np0005624785.novalocal systemd[4801]: Started podman-pause-3337ef56.scope.
Feb 19 19:26:11 np0005624785.novalocal sudo[14609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhvroxbwkdqxmtlmdfyhjjpdisdvsauf ; /usr/bin/python3'
Feb 19 19:26:11 np0005624785.novalocal sudo[14609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:11 np0005624785.novalocal python3[14611]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.64:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.64:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:26:11 np0005624785.novalocal python3[14611]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 19 19:26:11 np0005624785.novalocal sudo[14609]: pam_unix(sudo:session): session closed for user root
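The blockinfile task appends a marker-delimited stanza to /etc/containers/registries.conf so podman treats the CI registry at 38.102.83.64:5001 as insecure (plain HTTP). A sketch of the same idempotent append, using ansible's default block markers:

#!/usr/bin/env python3
# Sketch: append the insecure-registry block from the log to
# /etc/containers/registries.conf, guarded by the same ANSIBLE
# MANAGED BLOCK markers so a re-run leaves the file unchanged.
import pathlib

CONF = pathlib.Path("/etc/containers/registries.conf")
BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
END = "# END ANSIBLE MANAGED BLOCK"
BLOCK = (
    f"{BEGIN}\n"
    "[[registry]]\n"
    'location = "38.102.83.64:5001"\n'
    "insecure = true\n"
    f"{END}\n"
)

def ensure_block():
    text = CONF.read_text() if CONF.exists() else ""
    if BEGIN in text:  # block already present; leave it alone
        return
    CONF.write_text(text.rstrip("\n") + "\n" + BLOCK)

if __name__ == "__main__":
    ensure_block()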
Feb 19 19:26:12 np0005624785.novalocal sshd-session[8575]: Connection closed by 38.102.83.114 port 42086
Feb 19 19:26:12 np0005624785.novalocal sshd-session[8572]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:26:12 np0005624785.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Feb 19 19:26:12 np0005624785.novalocal systemd[1]: session-4.scope: Consumed 40.775s CPU time.
Feb 19 19:26:12 np0005624785.novalocal systemd-logind[810]: Session 4 logged out. Waiting for processes to exit.
Feb 19 19:26:12 np0005624785.novalocal systemd-logind[810]: Removed session 4.
Feb 19 19:26:30 np0005624785.novalocal sshd-session[25235]: Unable to negotiate with 38.102.83.176 port 34512: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 19 19:26:30 np0005624785.novalocal sshd-session[25238]: Connection closed by 38.102.83.176 port 34490 [preauth]
Feb 19 19:26:30 np0005624785.novalocal sshd-session[25236]: Connection closed by 38.102.83.176 port 34486 [preauth]
Feb 19 19:26:30 np0005624785.novalocal sshd-session[25239]: Unable to negotiate with 38.102.83.176 port 34500: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 19 19:26:30 np0005624785.novalocal sshd-session[25242]: Unable to negotiate with 38.102.83.176 port 34526: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 19 19:26:33 np0005624785.novalocal sshd-session[26945]: Accepted publickey for zuul from 38.102.83.114 port 40424 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 19:26:34 np0005624785.novalocal systemd-logind[810]: New session 5 of user zuul.
Feb 19 19:26:34 np0005624785.novalocal systemd[1]: Started Session 5 of User zuul.
Feb 19 19:26:34 np0005624785.novalocal sshd-session[26945]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:26:34 np0005624785.novalocal python3[27008]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDx2kF6EPXS0pKTvDxek/VwMNsI5uwx6j0fiipcVHXVHu97epmLYecqWoLnfTz9CDIu9d0Hh++MLxxc1BtD/ncw= zuul@np0005624784.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:26:34 np0005624785.novalocal sudo[27157]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhgwsgsxfpmpoquzelmsrturxyknario ; /usr/bin/python3'
Feb 19 19:26:34 np0005624785.novalocal sudo[27157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:34 np0005624785.novalocal python3[27167]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDx2kF6EPXS0pKTvDxek/VwMNsI5uwx6j0fiipcVHXVHu97epmLYecqWoLnfTz9CDIu9d0Hh++MLxxc1BtD/ncw= zuul@np0005624784.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:26:34 np0005624785.novalocal sudo[27157]: pam_unix(sudo:session): session closed for user root
Feb 19 19:26:35 np0005624785.novalocal sudo[27584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rntserdrhmccevxwooxmsgdybnfdkump ; /usr/bin/python3'
Feb 19 19:26:35 np0005624785.novalocal sudo[27584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:35 np0005624785.novalocal python3[27597]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005624785.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 19 19:26:35 np0005624785.novalocal useradd[27685]: new group: name=cloud-admin, GID=1002
Feb 19 19:26:35 np0005624785.novalocal useradd[27685]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Feb 19 19:26:35 np0005624785.novalocal sudo[27584]: pam_unix(sudo:session): session closed for user root
Feb 19 19:26:35 np0005624785.novalocal sudo[27819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lusnhordygymvafyrlroisotxtwjaqtn ; /usr/bin/python3'
Feb 19 19:26:35 np0005624785.novalocal sudo[27819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:36 np0005624785.novalocal python3[27831]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDx2kF6EPXS0pKTvDxek/VwMNsI5uwx6j0fiipcVHXVHu97epmLYecqWoLnfTz9CDIu9d0Hh++MLxxc1BtD/ncw= zuul@np0005624784.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 19 19:26:36 np0005624785.novalocal sudo[27819]: pam_unix(sudo:session): session closed for user root
Feb 19 19:26:36 np0005624785.novalocal sudo[28103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpcwgzrmbsozqscfnnnexqydmhdeaavu ; /usr/bin/python3'
Feb 19 19:26:36 np0005624785.novalocal sudo[28103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:36 np0005624785.novalocal python3[28110]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:26:36 np0005624785.novalocal sudo[28103]: pam_unix(sudo:session): session closed for user root
Feb 19 19:26:36 np0005624785.novalocal sudo[28448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cclgpimrbscrjvfvspnwagnqoxkfcngi ; /usr/bin/python3'
Feb 19 19:26:36 np0005624785.novalocal sudo[28448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:37 np0005624785.novalocal python3[28458]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771529196.3003323-135-137401386537541/source _original_basename=tmpzo8j2ay6 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:26:37 np0005624785.novalocal sudo[28448]: pam_unix(sudo:session): session closed for user root
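The authorized_key tasks install the controller's ECDSA key for zuul, root and the newly created cloud-admin user, and the copy task drops a sudoers file for cloud-admin. What the key installation amounts to, as a sketch (the key string is shortened here; the log records it in full):

#!/usr/bin/env python3
# Sketch: what the ansible.posix.authorized_key calls above amount to:
# append the controller's ECDSA key for a user, creating ~/.ssh with
# the usual modes. Key shortened for display; the log has it whole.
import os
import pathlib
import pwd

KEY = "ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005624784.novalocal"  # shortened

def authorize(user: str):
    ent = pwd.getpwnam(user)
    ssh_dir = pathlib.Path(ent.pw_dir, ".ssh")
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    lines = auth.read_text().splitlines() if auth.exists() else []
    if KEY not in lines:  # idempotent, like state=present
        lines.append(KEY)
        auth.write_text("\n".join(lines) + "\n")
    auth.chmod(0o600)
    os.chown(ssh_dir, ent.pw_uid, ent.pw_gid)
    os.chown(auth, ent.pw_uid, ent.pw_gid)

if __name__ == "__main__":
    for u in ("zuul", "root", "cloud-admin"):
        authorize(u)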
Feb 19 19:26:37 np0005624785.novalocal sudo[28833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkbgnpbykhjdlgrchioelwqnbyzbmpsr ; /usr/bin/python3'
Feb 19 19:26:37 np0005624785.novalocal sudo[28833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:26:37 np0005624785.novalocal python3[28842]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 19 19:26:37 np0005624785.novalocal systemd[1]: Starting Hostname Service...
Feb 19 19:26:37 np0005624785.novalocal systemd[1]: Started Hostname Service.
Feb 19 19:26:37 np0005624785.novalocal systemd-hostnamed[28966]: Changed pretty hostname to 'compute-0'
Feb 19 19:26:37 compute-0 systemd-hostnamed[28966]: Hostname set to <compute-0> (static)
Feb 19 19:26:37 compute-0 NetworkManager[7688]: <info>  [1771529197.9193] hostname: static hostname changed from "np0005624785.novalocal" to "compute-0"
Feb 19 19:26:37 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 19 19:26:37 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 19 19:26:37 compute-0 sudo[28833]: pam_unix(sudo:session): session closed for user root
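The hostname task goes through systemd-hostnamed, which sets both the pretty and static names; that is why the syslog host field flips from np0005624785.novalocal to compute-0 mid-stream and NetworkManager re-runs its dispatcher scripts. The hostnamectl equivalent, sketched:

#!/usr/bin/env python3
# Sketch: the hostnamectl equivalent of the ansible hostname task
# above; systemd-hostnamed logs the change exactly as seen here.
import subprocess

subprocess.run(["hostnamectl", "set-hostname", "compute-0"], check=True)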
Feb 19 19:26:38 compute-0 sshd-session[26956]: Connection closed by 38.102.83.114 port 40424
Feb 19 19:26:38 compute-0 sshd-session[26945]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:26:38 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Feb 19 19:26:38 compute-0 systemd[1]: session-5.scope: Consumed 2.099s CPU time.
Feb 19 19:26:38 compute-0 systemd-logind[810]: Session 5 logged out. Waiting for processes to exit.
Feb 19 19:26:38 compute-0 systemd-logind[810]: Removed session 5.
Feb 19 19:26:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 19:26:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 19:26:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 40.010s CPU time.
Feb 19 19:26:41 compute-0 systemd[1]: run-rc2409d2c5c244ef59f66934397ab1a20.service: Deactivated successfully.
Feb 19 19:26:47 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 19 19:27:07 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 19 19:27:41 compute-0 sshd-session[30495]: Invalid user oracle from 103.213.238.91 port 44888
Feb 19 19:27:41 compute-0 sshd-session[30495]: Received disconnect from 103.213.238.91 port 44888:11: Bye Bye [preauth]
Feb 19 19:27:41 compute-0 sshd-session[30495]: Disconnected from invalid user oracle 103.213.238.91 port 44888 [preauth]
Feb 19 19:30:33 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 19 19:30:33 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 19 19:30:33 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 19 19:30:33 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 19 19:31:06 compute-0 sshd-session[30502]: Accepted publickey for zuul from 38.102.83.176 port 38856 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 19:31:06 compute-0 systemd-logind[810]: New session 6 of user zuul.
Feb 19 19:31:06 compute-0 systemd[1]: Started Session 6 of User zuul.
Feb 19 19:31:06 compute-0 sshd-session[30502]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:31:06 compute-0 python3[30578]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:31:08 compute-0 sudo[30692]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvadzkoqqbhzwoqqywllhfprudcvvcge ; /usr/bin/python3'
Feb 19 19:31:08 compute-0 sudo[30692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:08 compute-0 python3[30694]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:08 compute-0 sudo[30692]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:08 compute-0 sudo[30765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvoujedpwjuxgpcamgpixaruebxjaap ; /usr/bin/python3'
Feb 19 19:31:08 compute-0 sudo[30765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:08 compute-0 python3[30767]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:08 compute-0 sudo[30765]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:08 compute-0 sudo[30791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zissvgqxjchgfehmwudvmcoragkqgfsx ; /usr/bin/python3'
Feb 19 19:31:08 compute-0 sudo[30791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:08 compute-0 python3[30793]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:09 compute-0 sudo[30791]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:09 compute-0 sudo[30864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxqiaioyhxwxqekxvzglyynnkwfvvuq ; /usr/bin/python3'
Feb 19 19:31:09 compute-0 sudo[30864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:09 compute-0 python3[30866]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:09 compute-0 sudo[30864]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:09 compute-0 sudo[30890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epumhljijdcsettrzojfujhjrrwkejnw ; /usr/bin/python3'
Feb 19 19:31:09 compute-0 sudo[30890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:09 compute-0 python3[30892]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:09 compute-0 sudo[30890]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:09 compute-0 sudo[30963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sazzinthqndyktpfmlgzmaekjjfnwjuq ; /usr/bin/python3'
Feb 19 19:31:09 compute-0 sudo[30963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:09 compute-0 python3[30965]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:09 compute-0 sudo[30963]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:09 compute-0 sudo[30989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmsliabptralbrlcuzcfjerpfereyaos ; /usr/bin/python3'
Feb 19 19:31:09 compute-0 sudo[30989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:10 compute-0 python3[30991]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:10 compute-0 sudo[30989]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:10 compute-0 sudo[31062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmazptazwhkyfhaipucphdalpdwdlwi ; /usr/bin/python3'
Feb 19 19:31:10 compute-0 sudo[31062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:10 compute-0 python3[31064]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:10 compute-0 sudo[31062]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:10 compute-0 sudo[31088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgstekvpqalttlnrfkhmpckhretqdfuy ; /usr/bin/python3'
Feb 19 19:31:10 compute-0 sudo[31088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:10 compute-0 python3[31090]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:10 compute-0 sudo[31088]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:10 compute-0 sudo[31161]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgswxefalemqztphktziefahlewgkejx ; /usr/bin/python3'
Feb 19 19:31:10 compute-0 sudo[31161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:10 compute-0 python3[31163]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:10 compute-0 sudo[31161]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:11 compute-0 sudo[31187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvytqamduchncuinuawptjrbpelxsti ; /usr/bin/python3'
Feb 19 19:31:11 compute-0 sudo[31187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:11 compute-0 python3[31189]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:11 compute-0 sudo[31187]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:11 compute-0 sudo[31260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjklpuekavkfpnglooocfzjrvdwndaf ; /usr/bin/python3'
Feb 19 19:31:11 compute-0 sudo[31260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:11 compute-0 python3[31262]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:11 compute-0 sudo[31260]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:11 compute-0 sudo[31286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deeonhwqkuafjxbffddwlabifkqsxaim ; /usr/bin/python3'
Feb 19 19:31:11 compute-0 sudo[31286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:11 compute-0 python3[31288]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 19 19:31:11 compute-0 sudo[31286]: pam_unix(sudo:session): session closed for user root
Feb 19 19:31:11 compute-0 sudo[31359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuowpmterhgndmmhzzjwgazmtebvnyqi ; /usr/bin/python3'
Feb 19 19:31:11 compute-0 sudo[31359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:31:12 compute-0 python3[31361]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771529468.0594962-34212-103010444086013/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:31:12 compute-0 sudo[31359]: pam_unix(sudo:session): session closed for user root
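Each .repo copy above logs the sha1 checksum computed on the controller. A sketch that re-verifies the deployed files against exactly those values:

#!/usr/bin/env python3
# Sketch: re-verify the .repo files deployed above against the sha1
# checksums ansible logged for each copy task.
import hashlib
import pathlib

EXPECTED = {
    "delorean.repo": "cc4ab4695da8ec58c451521a3dd2f41014af145d",
    "delorean-antelope-testing.repo": "4ebc56dead962b5d40b8d420dad43b948b84d3fc",
    "repo-setup-centos-highavailability.repo": "55d0f695fd0d8f47cbc3044ce0dcf5f88862490f",
    "repo-setup-centos-powertools.repo": "4b0cf99aa89c5c5be0151545863a7a7568f67568",
    "repo-setup-centos-appstream.repo": "e89244d2503b2996429dda1857290c1e91e393a1",
    "repo-setup-centos-baseos.repo": "36d926db23a40dbfa5c84b5e4d43eac6fa2301d6",
    "delorean.repo.md5": "362a603578148d54e8cd25942b88d7f471cc677a",
}

for name, want in EXPECTED.items():
    data = pathlib.Path("/etc/yum.repos.d", name).read_bytes()
    got = hashlib.sha1(data).hexdigest()
    print(name, "OK" if got == want else f"MISMATCH ({got})")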
Feb 19 19:31:14 compute-0 sshd-session[31387]: Connection closed by 192.168.122.11 port 51206 [preauth]
Feb 19 19:31:14 compute-0 sshd-session[31386]: Connection closed by 192.168.122.11 port 51190 [preauth]
Feb 19 19:31:14 compute-0 sshd-session[31388]: Unable to negotiate with 192.168.122.11 port 51222: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 19 19:31:14 compute-0 sshd-session[31389]: Unable to negotiate with 192.168.122.11 port 51236: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 19 19:31:14 compute-0 sshd-session[31390]: Unable to negotiate with 192.168.122.11 port 51252: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 19 19:31:24 compute-0 sshd-session[31397]: Invalid user administrator from 103.154.77.48 port 36442
Feb 19 19:31:24 compute-0 sshd-session[31397]: Received disconnect from 103.154.77.48 port 36442:11: Bye Bye [preauth]
Feb 19 19:31:24 compute-0 sshd-session[31397]: Disconnected from invalid user administrator 103.154.77.48 port 36442 [preauth]
Feb 19 19:32:33 compute-0 systemd[1]: Starting dnf makecache...
Feb 19 19:32:33 compute-0 dnf[31400]: Failed determining last makecache time.
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-barbican-42b4c41831408a8e323 357 kB/s |  13 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-python-glean-642fffe0203a8ffcc2443db52 3.0 MB/s |  65 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-python-stevedore-c4acc5639fd2329372142 6.1 MB/s | 131 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-python-cloudkitty-tests-tempest-783703 1.5 MB/s |  32 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-diskimage-builder-61b717cc45660834fe9a  12 MB/s | 349 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-nova-eaa65f0b85123a4ee343246 1.8 MB/s |  42 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-python-designate-tests-tempest-347fdbc 806 kB/s |  18 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-glance-1fd12c29b339f30fe823e 856 kB/s |  18 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.5 MB/s |  29 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-manila-d783d10e75495b73866db 1.1 MB/s |  25 kB     00:00
Feb 19 19:32:33 compute-0 dnf[31400]: delorean-openstack-neutron-95cadbd379667c8520c8 6.7 MB/s | 154 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-openstack-octavia-5975097dd4b021385178 1.2 MB/s |  26 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-openstack-watcher-c014f81a8647287f6dcc 938 kB/s |  16 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-python-tcib-78032d201b02cee27e8e644c61 319 kB/s | 7.4 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 6.1 MB/s | 144 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-openstack-swift-dc98a8463506ac520c469a 655 kB/s |  14 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.9 MB/s |  96 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: CentOS Stream 9 - BaseOS                         56 kB/s | 7.0 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: CentOS Stream 9 - AppStream                      63 kB/s | 7.1 kB     00:00
Feb 19 19:32:34 compute-0 dnf[31400]: CentOS Stream 9 - CRB                            58 kB/s | 6.9 kB     00:00
Feb 19 19:32:35 compute-0 dnf[31400]: CentOS Stream 9 - Extras packages                56 kB/s | 7.6 kB     00:00
Feb 19 19:32:35 compute-0 dnf[31400]: dlrn-antelope-testing                           4.6 MB/s | 1.1 MB     00:00
Feb 19 19:32:35 compute-0 dnf[31400]: dlrn-antelope-build-deps                         15 MB/s | 461 kB     00:00
Feb 19 19:32:35 compute-0 dnf[31400]: centos9-rabbitmq                                8.1 MB/s | 123 kB     00:00
Feb 19 19:32:35 compute-0 dnf[31400]: centos9-storage                                  24 MB/s | 415 kB     00:00
Feb 19 19:32:36 compute-0 dnf[31400]: centos9-opstools                                4.1 MB/s |  51 kB     00:00
Feb 19 19:32:36 compute-0 dnf[31400]: NFV SIG OpenvSwitch                              26 MB/s | 465 kB     00:00
Feb 19 19:32:36 compute-0 dnf[31400]: repo-setup-centos-appstream                     118 MB/s |  27 MB     00:00
Feb 19 19:32:42 compute-0 dnf[31400]: repo-setup-centos-baseos                         63 MB/s | 8.9 MB     00:00
Feb 19 19:32:43 compute-0 dnf[31400]: repo-setup-centos-highavailability               33 MB/s | 744 kB     00:00
Feb 19 19:32:44 compute-0 dnf[31400]: repo-setup-centos-powertools                     96 MB/s | 7.8 MB     00:00
Feb 19 19:32:46 compute-0 dnf[31400]: Extra Packages for Enterprise Linux 9 - x86_64   18 MB/s |  20 MB     00:01
Feb 19 19:32:59 compute-0 dnf[31400]: Metadata cache created.
Feb 19 19:32:59 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 19 19:32:59 compute-0 systemd[1]: Finished dnf makecache.
Feb 19 19:32:59 compute-0 systemd[1]: dnf-makecache.service: Consumed 23.675s CPU time.
Feb 19 19:33:58 compute-0 python3[31525]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:34:32 compute-0 sshd-session[31527]: Invalid user test1 from 158.174.210.161 port 16082
Feb 19 19:34:32 compute-0 sshd-session[31527]: Received disconnect from 158.174.210.161 port 16082:11: Bye Bye [preauth]
Feb 19 19:34:32 compute-0 sshd-session[31527]: Disconnected from invalid user test1 158.174.210.161 port 16082 [preauth]
Feb 19 19:35:21 compute-0 sshd-session[31529]: Invalid user mcserver from 103.213.238.91 port 60892
Feb 19 19:35:22 compute-0 sshd-session[31529]: Received disconnect from 103.213.238.91 port 60892:11: Bye Bye [preauth]
Feb 19 19:35:22 compute-0 sshd-session[31529]: Disconnected from invalid user mcserver 103.213.238.91 port 60892 [preauth]
Feb 19 19:38:28 compute-0 sshd-session[31533]: Invalid user rstudio from 158.174.210.161 port 1560
Feb 19 19:38:28 compute-0 sshd-session[31533]: Received disconnect from 158.174.210.161 port 1560:11: Bye Bye [preauth]
Feb 19 19:38:28 compute-0 sshd-session[31533]: Disconnected from invalid user rstudio 158.174.210.161 port 1560 [preauth]
Feb 19 19:38:42 compute-0 sshd-session[31535]: Invalid user administrator from 103.154.77.48 port 48572
Feb 19 19:38:42 compute-0 sshd-session[31535]: Received disconnect from 103.154.77.48 port 48572:11: Bye Bye [preauth]
Feb 19 19:38:42 compute-0 sshd-session[31535]: Disconnected from invalid user administrator 103.154.77.48 port 48572 [preauth]
Feb 19 19:38:57 compute-0 sshd-session[30505]: Received disconnect from 38.102.83.176 port 38856:11: disconnected by user
Feb 19 19:38:57 compute-0 sshd-session[30505]: Disconnected from user zuul 38.102.83.176 port 38856
Feb 19 19:38:57 compute-0 sshd-session[30502]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:38:57 compute-0 systemd-logind[810]: Session 6 logged out. Waiting for processes to exit.
Feb 19 19:38:57 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Feb 19 19:38:57 compute-0 systemd[1]: session-6.scope: Consumed 4.615s CPU time.
Feb 19 19:38:57 compute-0 systemd-logind[810]: Removed session 6.
Feb 19 19:39:11 compute-0 sshd-session[31537]: Received disconnect from 103.213.238.91 port 38656:11: Bye Bye [preauth]
Feb 19 19:39:11 compute-0 sshd-session[31537]: Disconnected from authenticating user root 103.213.238.91 port 38656 [preauth]
Feb 19 19:39:39 compute-0 sshd-session[31539]: Received disconnect from 125.31.2.160 port 60344:11: Bye Bye [preauth]
Feb 19 19:39:39 compute-0 sshd-session[31539]: Disconnected from authenticating user root 125.31.2.160 port 60344 [preauth]
Feb 19 19:41:51 compute-0 sshd-session[31541]: Invalid user n8n from 103.154.77.48 port 47476
Feb 19 19:41:52 compute-0 sshd-session[31541]: Received disconnect from 103.154.77.48 port 47476:11: Bye Bye [preauth]
Feb 19 19:41:52 compute-0 sshd-session[31541]: Disconnected from invalid user n8n 103.154.77.48 port 47476 [preauth]
Feb 19 19:42:48 compute-0 sshd-session[31544]: Received disconnect from 158.174.210.161 port 49884:11: Bye Bye [preauth]
Feb 19 19:42:48 compute-0 sshd-session[31544]: Disconnected from authenticating user root 158.174.210.161 port 49884 [preauth]
Feb 19 19:42:51 compute-0 sshd-session[31546]: Invalid user n8n from 103.213.238.91 port 44648
Feb 19 19:42:52 compute-0 sshd-session[31546]: Received disconnect from 103.213.238.91 port 44648:11: Bye Bye [preauth]
Feb 19 19:42:52 compute-0 sshd-session[31546]: Disconnected from invalid user n8n 103.213.238.91 port 44648 [preauth]
Feb 19 19:45:00 compute-0 sshd-session[31550]: Invalid user nutanix from 103.154.77.48 port 46380
Feb 19 19:45:01 compute-0 sshd-session[31550]: Received disconnect from 103.154.77.48 port 46380:11: Bye Bye [preauth]
Feb 19 19:45:01 compute-0 sshd-session[31550]: Disconnected from invalid user nutanix 103.154.77.48 port 46380 [preauth]
Feb 19 19:45:24 compute-0 sshd-session[31552]: Received disconnect from 103.103.245.7 port 34250:11: Bye Bye [preauth]
Feb 19 19:45:24 compute-0 sshd-session[31552]: Disconnected from authenticating user root 103.103.245.7 port 34250 [preauth]
Feb 19 19:46:19 compute-0 sshd-session[31554]: Invalid user gituser from 125.31.2.160 port 41614
Feb 19 19:46:19 compute-0 sshd-session[31554]: Received disconnect from 125.31.2.160 port 41614:11: Bye Bye [preauth]
Feb 19 19:46:19 compute-0 sshd-session[31554]: Disconnected from invalid user gituser 125.31.2.160 port 41614 [preauth]
Feb 19 19:46:24 compute-0 sshd-session[31556]: Invalid user alex from 103.213.238.91 port 50630
Feb 19 19:46:25 compute-0 sshd-session[31556]: Received disconnect from 103.213.238.91 port 50630:11: Bye Bye [preauth]
Feb 19 19:46:25 compute-0 sshd-session[31556]: Disconnected from invalid user alex 103.213.238.91 port 50630 [preauth]
Feb 19 19:46:37 compute-0 sshd-session[31558]: Received disconnect from 158.174.210.161 port 52583:11: Bye Bye [preauth]
Feb 19 19:46:37 compute-0 sshd-session[31558]: Disconnected from authenticating user root 158.174.210.161 port 52583 [preauth]
Feb 19 19:46:47 compute-0 sshd-session[31560]: Accepted publickey for zuul from 192.168.122.30 port 40394 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:46:47 compute-0 systemd-logind[810]: New session 7 of user zuul.
Feb 19 19:46:47 compute-0 systemd[1]: Started Session 7 of User zuul.
Feb 19 19:46:47 compute-0 sshd-session[31560]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:46:49 compute-0 python3.9[31713]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:46:50 compute-0 sudo[31892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsxbrkltnpaytvabjfxapdgdnrvnskig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530410.3143818-27-227449674529787/AnsiballZ_command.py'
Feb 19 19:46:50 compute-0 sudo[31892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:46:51 compute-0 python3.9[31895]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:46:57 compute-0 sudo[31892]: pam_unix(sudo:session): session closed for user root
Feb 19 19:46:57 compute-0 sshd-session[31563]: Connection closed by 192.168.122.30 port 40394
Feb 19 19:46:57 compute-0 sshd-session[31560]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:46:57 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Feb 19 19:46:57 compute-0 systemd[1]: session-7.scope: Consumed 7.246s CPU time.
Feb 19 19:46:57 compute-0 systemd-logind[810]: Session 7 logged out. Waiting for processes to exit.
Feb 19 19:46:57 compute-0 systemd-logind[810]: Removed session 7.
Feb 19 19:47:03 compute-0 sshd-session[31952]: Accepted publickey for zuul from 192.168.122.30 port 50288 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:47:03 compute-0 systemd-logind[810]: New session 8 of user zuul.
Feb 19 19:47:03 compute-0 systemd[1]: Started Session 8 of User zuul.
Feb 19 19:47:03 compute-0 sshd-session[31952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:47:04 compute-0 python3.9[32105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:47:04 compute-0 sshd-session[31955]: Connection closed by 192.168.122.30 port 50288
Feb 19 19:47:04 compute-0 sshd-session[31952]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:47:04 compute-0 systemd-logind[810]: Session 8 logged out. Waiting for processes to exit.
Feb 19 19:47:04 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Feb 19 19:47:04 compute-0 systemd-logind[810]: Removed session 8.
Feb 19 19:47:20 compute-0 sshd-session[32134]: Accepted publickey for zuul from 192.168.122.30 port 49202 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:47:20 compute-0 systemd-logind[810]: New session 9 of user zuul.
Feb 19 19:47:20 compute-0 systemd[1]: Started Session 9 of User zuul.
Feb 19 19:47:20 compute-0 sshd-session[32134]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:47:21 compute-0 python3.9[32287]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 19 19:47:22 compute-0 python3.9[32461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:47:23 compute-0 sudo[32611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aegqmzdlyoqakockasqleffyklnfcisb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530442.9548457-40-264842912966173/AnsiballZ_command.py'
Feb 19 19:47:23 compute-0 sudo[32611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:23 compute-0 python3.9[32614]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:47:23 compute-0 sudo[32611]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:24 compute-0 sudo[32765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvbjqxuwlvykehsscqkcpvlianbhishi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530443.8239264-52-270718118666064/AnsiballZ_stat.py'
Feb 19 19:47:24 compute-0 sudo[32765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:24 compute-0 python3.9[32768]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:47:24 compute-0 sudo[32765]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:24 compute-0 sudo[32918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unqpjbmgzjtvqfqdsoekxdvsjyplsjxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530444.5233552-60-58888220837559/AnsiballZ_file.py'
Feb 19 19:47:24 compute-0 sudo[32918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:25 compute-0 python3.9[32921]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:47:25 compute-0 sudo[32918]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:25 compute-0 sudo[33071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklwmtcyqugavfdtykfhlikwbcbuzrfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530445.2899003-68-153547911889621/AnsiballZ_stat.py'
Feb 19 19:47:25 compute-0 sudo[33071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:25 compute-0 python3.9[33074]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:47:25 compute-0 sudo[33071]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:26 compute-0 sudo[33195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmkleesuukzcnihucfoetgjwibpbfbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530445.2899003-68-153547911889621/AnsiballZ_copy.py'
Feb 19 19:47:26 compute-0 sudo[33195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:26 compute-0 python3.9[33198]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530445.2899003-68-153547911889621/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:47:26 compute-0 sudo[33195]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:27 compute-0 sudo[33348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cysqtipjehwoqmyjcmhcxlhghigxajmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530446.7416768-83-252954734306148/AnsiballZ_setup.py'
Feb 19 19:47:27 compute-0 sudo[33348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:27 compute-0 python3.9[33351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:47:27 compute-0 sudo[33348]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:27 compute-0 sudo[33505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmnprobtzdzjlvgauhedndhfkzqtjjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530447.700247-91-87116760144076/AnsiballZ_file.py'
Feb 19 19:47:27 compute-0 sudo[33505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:28 compute-0 python3.9[33508]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:47:28 compute-0 sudo[33505]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:29 compute-0 sudo[33658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtezxpoidgzblbthqbhjrdevgasidje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530448.8236954-100-162344846233420/AnsiballZ_file.py'
Feb 19 19:47:29 compute-0 sudo[33658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:29 compute-0 python3.9[33661]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:47:29 compute-0 sudo[33658]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:30 compute-0 python3.9[33811]: ansible-ansible.builtin.service_facts Invoked
Feb 19 19:47:32 compute-0 python3.9[34065]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:47:33 compute-0 python3.9[34215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:47:35 compute-0 python3.9[34369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:47:36 compute-0 sudo[34525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvvvpwgqbxlshpvcglkucmehqgfytcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530456.1033833-148-51217067821293/AnsiballZ_setup.py'
Feb 19 19:47:36 compute-0 sudo[34525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:36 compute-0 python3.9[34528]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:47:36 compute-0 sudo[34525]: pam_unix(sudo:session): session closed for user root
Feb 19 19:47:37 compute-0 sudo[34610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxqviiznbswdeslmimzlxwkznivbxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530456.1033833-148-51217067821293/AnsiballZ_dnf.py'
Feb 19 19:47:37 compute-0 sudo[34610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:47:37 compute-0 python3.9[34613]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:47:59 compute-0 sshd-session[34730]: Received disconnect from 103.154.77.48 port 45282:11: Bye Bye [preauth]
Feb 19 19:47:59 compute-0 sshd-session[34730]: Disconnected from authenticating user root 103.154.77.48 port 45282 [preauth]
Feb 19 19:48:26 compute-0 systemd[1]: Reloading.
Feb 19 19:48:26 compute-0 systemd-rc-local-generator[34808]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:48:26 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 19 19:48:27 compute-0 systemd[1]: Reloading.
Feb 19 19:48:27 compute-0 systemd-rc-local-generator[34858]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:48:27 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 19 19:48:27 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 19 19:48:27 compute-0 systemd[1]: Reloading.
Feb 19 19:48:27 compute-0 systemd-rc-local-generator[34911]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:48:27 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 19 19:48:27 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 19:48:27 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 19:48:27 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 19:49:35 compute-0 kernel: SELinux:  Converting 2728 SID table entries...
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:49:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:49:36 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 19 19:49:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 19:49:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 19:49:36 compute-0 systemd[1]: Reloading.
Feb 19 19:49:36 compute-0 systemd-rc-local-generator[35234]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:49:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 19:49:37 compute-0 sudo[34610]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 19:49:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 19:49:37 compute-0 systemd[1]: run-r03344fb0ae1a430f853a41439f01590b.service: Deactivated successfully.
Feb 19 19:49:37 compute-0 sudo[36165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ephhjlyvhjwafuvyjilrdlllwsroyzpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530577.5559752-160-204720276819097/AnsiballZ_command.py'
Feb 19 19:49:37 compute-0 sudo[36165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:37 compute-0 python3.9[36168]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:49:38 compute-0 sudo[36165]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:39 compute-0 sudo[36447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvznaafxutfdhwjcfgykjdgdqvhkjoqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530579.079418-168-26351899702125/AnsiballZ_selinux.py'
Feb 19 19:49:39 compute-0 sudo[36447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:40 compute-0 python3.9[36450]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 19 19:49:40 compute-0 sudo[36447]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:41 compute-0 sudo[36600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpgcttzpfssrvfsowafvuyytmqijcmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530580.556363-179-276212057903136/AnsiballZ_command.py'
Feb 19 19:49:41 compute-0 sudo[36600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:41 compute-0 python3.9[36603]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 19 19:49:42 compute-0 sudo[36600]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:42 compute-0 sudo[36754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllhtbrurguohlsapinecydtuqztnwot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530582.4531262-187-58972511236281/AnsiballZ_file.py'
Feb 19 19:49:42 compute-0 sudo[36754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:43 compute-0 python3.9[36757]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:49:43 compute-0 sudo[36754]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:44 compute-0 sudo[36907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywotqxqtaxixsctrpehcuykzxmjxcebx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530583.7710333-195-139474410317523/AnsiballZ_mount.py'
Feb 19 19:49:44 compute-0 sudo[36907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:44 compute-0 python3.9[36910]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 19 19:49:44 compute-0 sudo[36907]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:45 compute-0 sudo[37060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzwscdrtfjrfinnftehoeshzojilxuwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530585.1120055-223-194472042948755/AnsiballZ_file.py'
Feb 19 19:49:45 compute-0 sudo[37060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:45 compute-0 python3.9[37063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:49:45 compute-0 sudo[37060]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:45 compute-0 sudo[37213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcemdsypkeicbioehakhtfwgdzooundo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530585.6937613-231-78141180837360/AnsiballZ_stat.py'
Feb 19 19:49:45 compute-0 sudo[37213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:46 compute-0 python3.9[37216]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:49:46 compute-0 sudo[37213]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:46 compute-0 sudo[37337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtchmsdfatzbxlyozsnluwduwtnhzgyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530585.6937613-231-78141180837360/AnsiballZ_copy.py'
Feb 19 19:49:46 compute-0 sudo[37337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:46 compute-0 python3.9[37340]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530585.6937613-231-78141180837360/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:49:46 compute-0 sudo[37337]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:47 compute-0 sudo[37490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnusrupnybgxcginhumegiydfclqvfxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530586.956177-255-70869533808219/AnsiballZ_stat.py'
Feb 19 19:49:47 compute-0 sudo[37490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:47 compute-0 python3.9[37493]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:49:47 compute-0 sudo[37490]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:47 compute-0 sudo[37643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdqpqkbwvnjlqtrqaeiubhnolfrlydak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530587.4782465-263-160425820016062/AnsiballZ_command.py'
Feb 19 19:49:47 compute-0 sudo[37643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:50 compute-0 sshd-session[37648]: Invalid user dixi from 125.31.2.160 port 47012
Feb 19 19:49:50 compute-0 sshd-session[37648]: Received disconnect from 125.31.2.160 port 47012:11: Bye Bye [preauth]
Feb 19 19:49:50 compute-0 sshd-session[37648]: Disconnected from invalid user dixi 125.31.2.160 port 47012 [preauth]
Feb 19 19:49:51 compute-0 python3.9[37646]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:49:51 compute-0 sudo[37643]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:51 compute-0 sudo[37800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyaplttqmdegwknuvfohknvvjepzurp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530591.7605822-271-113078454693995/AnsiballZ_file.py'
Feb 19 19:49:51 compute-0 sudo[37800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:52 compute-0 python3.9[37803]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:49:52 compute-0 sudo[37800]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:52 compute-0 sudo[37953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgitlpzrbojvmkikcnwnwqvyifqibygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530592.4908383-282-125904392100766/AnsiballZ_getent.py'
Feb 19 19:49:52 compute-0 sudo[37953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:53 compute-0 python3.9[37956]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 19 19:49:53 compute-0 sudo[37953]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:53 compute-0 rsyslogd[1014]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 19:49:53 compute-0 sudo[38108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thycjwaumeicnvctqdmrcsigipeodkwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530593.1979742-290-259472574715113/AnsiballZ_group.py'
Feb 19 19:49:53 compute-0 sudo[38108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:53 compute-0 python3.9[38111]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 19 19:49:53 compute-0 groupadd[38112]: group added to /etc/group: name=qemu, GID=107
Feb 19 19:49:53 compute-0 groupadd[38112]: group added to /etc/gshadow: name=qemu
Feb 19 19:49:53 compute-0 groupadd[38112]: new group: name=qemu, GID=107
Feb 19 19:49:53 compute-0 sudo[38108]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:54 compute-0 sudo[38267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbiiszcugbdvkajppntojsjvsvvbcjzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530593.9890487-298-39013724552265/AnsiballZ_user.py'
Feb 19 19:49:54 compute-0 sudo[38267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:54 compute-0 python3.9[38270]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 19 19:49:54 compute-0 useradd[38272]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/1
Feb 19 19:49:54 compute-0 sudo[38267]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:55 compute-0 sudo[38428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zitzuoutdhgshjmmkcychkehfafvsani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530594.8492823-306-135997012185769/AnsiballZ_getent.py'
Feb 19 19:49:55 compute-0 sudo[38428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:55 compute-0 python3.9[38431]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 19 19:49:55 compute-0 sudo[38428]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:55 compute-0 sudo[38582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uogsdclyzqnlvogfazrvheczpnaqpjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530595.346974-314-56727628640282/AnsiballZ_group.py'
Feb 19 19:49:55 compute-0 sudo[38582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:55 compute-0 python3.9[38585]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 19 19:49:55 compute-0 groupadd[38586]: group added to /etc/group: name=hugetlbfs, GID=42477
Feb 19 19:49:55 compute-0 groupadd[38586]: group added to /etc/gshadow: name=hugetlbfs
Feb 19 19:49:55 compute-0 groupadd[38586]: new group: name=hugetlbfs, GID=42477
Feb 19 19:49:55 compute-0 sudo[38582]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:56 compute-0 sudo[38741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stufdnvvkpoagizkptuwavumbeocemfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530595.9656315-323-199032650066059/AnsiballZ_file.py'
Feb 19 19:49:56 compute-0 sudo[38741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:56 compute-0 python3.9[38744]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 19 19:49:56 compute-0 sudo[38741]: pam_unix(sudo:session): session closed for user root
Feb 19 19:49:58 compute-0 sudo[38894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mikuznvnwidweoeytrlrolqehkhqsiew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530596.8648064-334-78245196516850/AnsiballZ_dnf.py'
Feb 19 19:49:58 compute-0 sudo[38894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:49:58 compute-0 python3.9[38897]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:50:00 compute-0 sudo[38894]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:01 compute-0 sshd-session[38899]: Invalid user varnish from 103.213.238.91 port 56614
Feb 19 19:50:01 compute-0 sshd-session[38899]: Received disconnect from 103.213.238.91 port 56614:11: Bye Bye [preauth]
Feb 19 19:50:01 compute-0 sshd-session[38899]: Disconnected from invalid user varnish 103.213.238.91 port 56614 [preauth]
Feb 19 19:50:02 compute-0 sudo[39050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkrpjplpmpblbttsowchszjukehdsuve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530600.239638-342-10318274361218/AnsiballZ_file.py'
Feb 19 19:50:02 compute-0 sudo[39050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:02 compute-0 python3.9[39053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:02 compute-0 sudo[39050]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:04 compute-0 sudo[39203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elaftmfjuygftvikpvdgyloqobxjwjbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530603.918456-350-64809261127768/AnsiballZ_stat.py'
Feb 19 19:50:04 compute-0 sudo[39203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:04 compute-0 python3.9[39206]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:50:04 compute-0 sudo[39203]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:06 compute-0 sudo[39327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxbgvvenfwdiurvswqmlgfunfhymned ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530603.918456-350-64809261127768/AnsiballZ_copy.py'
Feb 19 19:50:06 compute-0 sudo[39327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:06 compute-0 python3.9[39330]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771530603.918456-350-64809261127768/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:06 compute-0 sudo[39327]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:07 compute-0 sudo[39480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plqxtbrocudvwsocfngvageijtfwkjib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530606.4945405-365-259905093528828/AnsiballZ_systemd.py'
Feb 19 19:50:07 compute-0 sudo[39480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:07 compute-0 python3.9[39483]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:50:07 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 19 19:50:08 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 19 19:50:08 compute-0 kernel: Bridge firewalling registered
Feb 19 19:50:08 compute-0 systemd-modules-load[39487]: Inserted module 'br_netfilter'
Feb 19 19:50:08 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 19 19:50:08 compute-0 sudo[39480]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:08 compute-0 sudo[39640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtofxychppucoqlwhrpxfyyxzndlubhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530608.2077148-373-253236468040625/AnsiballZ_stat.py'
Feb 19 19:50:08 compute-0 sudo[39640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:09 compute-0 python3.9[39643]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:50:09 compute-0 sudo[39640]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:09 compute-0 sudo[39764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odvveuxzyovqfjkhaxndzxrdwbwppzmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530608.2077148-373-253236468040625/AnsiballZ_copy.py'
Feb 19 19:50:09 compute-0 sudo[39764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:09 compute-0 python3.9[39767]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771530608.2077148-373-253236468040625/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:09 compute-0 sudo[39764]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:10 compute-0 sudo[39917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iusiqkmjgkwrhajfiqqfvcjksbdagpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530609.9983394-391-12926631510314/AnsiballZ_dnf.py'
Feb 19 19:50:10 compute-0 sudo[39917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:10 compute-0 python3.9[39920]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:50:13 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 19:50:14 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 19:50:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 19:50:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 19:50:14 compute-0 systemd[1]: Reloading.
Feb 19 19:50:14 compute-0 systemd-rc-local-generator[39980]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:50:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 19:50:15 compute-0 sudo[39917]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:15 compute-0 python3.9[41504]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:50:16 compute-0 python3.9[43346]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 19 19:50:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 19:50:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 19:50:17 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.185s CPU time.
Feb 19 19:50:17 compute-0 systemd[1]: run-rfa7d1a44a1594c47b418de8466ed13e5.service: Deactivated successfully.
Feb 19 19:50:17 compute-0 python3.9[44012]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:50:18 compute-0 sudo[44162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvjzwendjzetmstzbahbodfeepvcsfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530618.0860977-430-47966021776297/AnsiballZ_command.py'
Feb 19 19:50:18 compute-0 sudo[44162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:18 compute-0 python3.9[44165]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:18 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 19 19:50:18 compute-0 systemd[1]: Starting Authorization Manager...
Feb 19 19:50:18 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 19 19:50:18 compute-0 polkitd[44382]: Started polkitd version 0.117
Feb 19 19:50:18 compute-0 polkitd[44382]: Loading rules from directory /etc/polkit-1/rules.d
Feb 19 19:50:18 compute-0 polkitd[44382]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 19 19:50:18 compute-0 polkitd[44382]: Finished loading, compiling and executing 2 rules
Feb 19 19:50:18 compute-0 polkitd[44382]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 19 19:50:18 compute-0 systemd[1]: Started Authorization Manager.
Feb 19 19:50:19 compute-0 sudo[44162]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:19 compute-0 sudo[44550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjgyktpvupgfbajbayilmumsyvlsqnzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530619.2200143-439-255769384274741/AnsiballZ_systemd.py'
Feb 19 19:50:19 compute-0 sudo[44550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:19 compute-0 python3.9[44553]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:50:19 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 19 19:50:19 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 19 19:50:19 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 19 19:50:19 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 19 19:50:20 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 19 19:50:20 compute-0 sudo[44550]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:20 compute-0 python3.9[44714]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 19 19:50:22 compute-0 sudo[44864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msswxhilezhzwnoqvwfxgxmcwpadbhsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530622.6686833-496-245280390258541/AnsiballZ_systemd.py'
Feb 19 19:50:22 compute-0 sudo[44864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:23 compute-0 python3.9[44867]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:50:23 compute-0 systemd[1]: Reloading.
Feb 19 19:50:23 compute-0 systemd-rc-local-generator[44895]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:50:23 compute-0 sudo[44864]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:24 compute-0 sudo[45061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnbvahtuxgxtywogkcvnsykkfpqohok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530623.807781-496-205813151136380/AnsiballZ_systemd.py'
Feb 19 19:50:24 compute-0 sudo[45061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:24 compute-0 python3.9[45064]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:50:24 compute-0 systemd[1]: Reloading.
Feb 19 19:50:24 compute-0 systemd-rc-local-generator[45089]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:50:24 compute-0 sudo[45061]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:25 compute-0 sudo[45258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsilnkwkeupxkqoffyiccsiufmqbbene ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530625.1918044-512-227588218936246/AnsiballZ_command.py'
Feb 19 19:50:25 compute-0 sudo[45258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:25 compute-0 python3.9[45261]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:25 compute-0 sudo[45258]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:25 compute-0 sudo[45412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sczconuqldeuqvbxcuhrpxficttuivsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530625.7439091-520-275114632504342/AnsiballZ_command.py'
Feb 19 19:50:25 compute-0 sudo[45412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:26 compute-0 python3.9[45415]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:26 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 19 19:50:26 compute-0 sudo[45412]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:26 compute-0 sudo[45566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpuajnihenxevpzdmptzxadfrckcwwqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530626.342209-528-13261369681384/AnsiballZ_command.py'
Feb 19 19:50:26 compute-0 sudo[45566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:27 compute-0 python3.9[45569]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:29 compute-0 sudo[45566]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:29 compute-0 sudo[45731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwcmddijigdusqrbvshvemuiozimprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530629.2831407-536-63585010499839/AnsiballZ_command.py'
Feb 19 19:50:29 compute-0 sudo[45731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:29 compute-0 python3.9[45734]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:29 compute-0 sudo[45731]: pam_unix(sudo:session): session closed for user root
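Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges all previously shared pages. Note that the logged call goes through ansible.legacy.command with _uses_shell=False, so the > token is handed to echo as a literal argument instead of performing a redirect; for the write to actually land, the task would need the shell module, roughly:

    - name: Stop KSM and unmerge shared pages
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run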
Feb 19 19:50:31 compute-0 sudo[45885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhmerhztpjmumzeeegpkpfwmcojyizj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530630.3393023-544-33251265944703/AnsiballZ_systemd.py'
Feb 19 19:50:31 compute-0 sudo[45885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:31 compute-0 sshd-session[45679]: Invalid user n8n from 158.174.210.161 port 30921
Feb 19 19:50:31 compute-0 sshd-session[45679]: Received disconnect from 158.174.210.161 port 30921:11: Bye Bye [preauth]
Feb 19 19:50:31 compute-0 sshd-session[45679]: Disconnected from invalid user n8n 158.174.210.161 port 30921 [preauth]
Feb 19 19:50:31 compute-0 python3.9[45888]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:50:31 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 19 19:50:31 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Feb 19 19:50:31 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Feb 19 19:50:31 compute-0 systemd[1]: Starting Apply Kernel Variables...
Feb 19 19:50:31 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 19 19:50:31 compute-0 systemd[1]: Finished Apply Kernel Variables.
Feb 19 19:50:31 compute-0 sudo[45885]: pam_unix(sudo:session): session closed for user root
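Restarting systemd-sysctl.service re-applies everything under /etc/sysctl.d and friends, which is what the Stopped/Starting/Finished "Apply Kernel Variables" lines above reflect. The logged invocation corresponds to a task along these lines:

    - name: Re-apply kernel parameters
      ansible.builtin.systemd:
        name: systemd-sysctl.service
        state: restarted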
Feb 19 19:50:31 compute-0 sshd-session[32137]: Connection closed by 192.168.122.30 port 49202
Feb 19 19:50:31 compute-0 sshd-session[32134]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:50:31 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Feb 19 19:50:31 compute-0 systemd[1]: session-9.scope: Consumed 2min 1.971s CPU time.
Feb 19 19:50:31 compute-0 systemd-logind[810]: Session 9 logged out. Waiting for processes to exit.
Feb 19 19:50:31 compute-0 systemd-logind[810]: Removed session 9.
Feb 19 19:50:37 compute-0 sshd-session[45918]: Accepted publickey for zuul from 192.168.122.30 port 45718 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:50:37 compute-0 systemd-logind[810]: New session 10 of user zuul.
Feb 19 19:50:37 compute-0 systemd[1]: Started Session 10 of User zuul.
Feb 19 19:50:37 compute-0 sshd-session[45918]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:50:38 compute-0 python3.9[46071]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:50:39 compute-0 python3.9[46225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:50:42 compute-0 sudo[46379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajxgbnkubehzlhhyouyjfangudagrsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530641.918925-45-7300196093500/AnsiballZ_command.py'
Feb 19 19:50:42 compute-0 sudo[46379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:42 compute-0 python3.9[46382]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:42 compute-0 sudo[46379]: pam_unix(sudo:session): session closed for user root
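The shell invocation above probes for the growvols utility with a pinned PATH. Reconstructed from the logged parameters (whether the play tolerates a non-zero exit is not visible in the log):

    - name: Check whether growvols is available
      ansible.builtin.shell: >-
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
        which growvols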
Feb 19 19:50:44 compute-0 python3.9[46533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:50:44 compute-0 sudo[46687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtjfcnueimxfuxvsmeskysfmpnavpqhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530644.3097446-65-234389818929088/AnsiballZ_setup.py'
Feb 19 19:50:44 compute-0 sudo[46687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:44 compute-0 python3.9[46690]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:50:45 compute-0 sudo[46687]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:45 compute-0 sudo[46772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhrxtkcdjlxutglmkmaunnpmdfrekgay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530644.3097446-65-234389818929088/AnsiballZ_dnf.py'
Feb 19 19:50:45 compute-0 sudo[46772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:45 compute-0 python3.9[46775]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:50:46 compute-0 sudo[46772]: pam_unix(sudo:session): session closed for user root
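The dnf run above installs podman with stock options. As a task:

    - name: Install podman
      ansible.builtin.dnf:
        name: podman
        state: present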
Feb 19 19:50:47 compute-0 sudo[46926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavyowerrbfzmmmxplvlewbmwubhpppl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530646.898232-77-182573106082727/AnsiballZ_setup.py'
Feb 19 19:50:47 compute-0 sudo[46926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:47 compute-0 python3.9[46929]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:50:47 compute-0 sudo[46926]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:48 compute-0 sudo[47098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzwzrnwuchrvdnljmtpnxdosxrmunvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530647.8038363-88-57729919113070/AnsiballZ_file.py'
Feb 19 19:50:48 compute-0 sudo[47098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:48 compute-0 python3.9[47101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:50:48 compute-0 sudo[47098]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:48 compute-0 sudo[47251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qouandepwqiejolssofrfsdixaycouqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530648.4870262-96-225748880236390/AnsiballZ_command.py'
Feb 19 19:50:48 compute-0 sudo[47251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:48 compute-0 python3.9[47254]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:50:49 compute-0 podman[47255]: 2026-02-19 19:50:49.253611329 +0000 UTC m=+0.356299290 system refresh
Feb 19 19:50:49 compute-0 sudo[47251]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:49 compute-0 sudo[47415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqshayqzwyilbjnietbzjdzkzereurha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530649.420894-104-143878839463326/AnsiballZ_stat.py'
Feb 19 19:50:49 compute-0 sudo[47415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:50 compute-0 python3.9[47418]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:50:50 compute-0 sudo[47415]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:50:50 compute-0 sudo[47539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyrywchjtdzxcvfhreejxddwcxrsyijc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530649.420894-104-143878839463326/AnsiballZ_copy.py'
Feb 19 19:50:50 compute-0 sudo[47539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:50 compute-0 python3.9[47542]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530649.420894-104-143878839463326/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c51e4ce1cf54f0c787e51600d1edc16787cd2877 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:50:50 compute-0 sudo[47539]: pam_unix(sudo:session): session closed for user root
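These steps lay down the default podman network definition: create /etc/containers/networks, inspect the existing "podman" network (that first podman call also produces the "system refresh" event logged above), then write podman.json. The copy's _original_basename of podman_network_config.j2 suggests the file is rendered by the template module, which executes remotely as ansible.legacy.copy; a sketch under that assumption:

    - name: Ensure the container network directory exists
      ansible.builtin.file:
        path: /etc/containers/networks
        state: directory
        owner: root
        group: root
        mode: "0755"

    - name: Render the default podman network definition
      ansible.builtin.template:
        src: podman_network_config.j2
        dest: /etc/containers/networks/podman.json
        owner: root
        group: root
        mode: "0644"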
Feb 19 19:50:51 compute-0 sudo[47692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbwvkeiidbjlqpzxaatfjtpuwhflngb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530650.8629913-119-109439818391359/AnsiballZ_stat.py'
Feb 19 19:50:51 compute-0 sudo[47692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:51 compute-0 python3.9[47695]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:50:51 compute-0 sudo[47692]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:51 compute-0 sudo[47816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqohhrvntufqvfinblcohzqyotxbvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530650.8629913-119-109439818391359/AnsiballZ_copy.py'
Feb 19 19:50:51 compute-0 sudo[47816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:51 compute-0 python3.9[47819]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771530650.8629913-119-109439818391359/.source.conf follow=False _original_basename=registries.conf.j2 checksum=485c636425e28137b9c2e788e9d5fc748a88106d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:51 compute-0 sudo[47816]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:52 compute-0 sudo[47969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wresakvwfuyubccxkrarcgwdiaqmzsgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530651.9057689-135-178954868318862/AnsiballZ_ini_file.py'
Feb 19 19:50:52 compute-0 sudo[47969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:52 compute-0 python3.9[47972]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:52 compute-0 sudo[47969]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:52 compute-0 sudo[48122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqiemhzinhryflffficgirjotyhbtva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530652.585816-135-180792120557796/AnsiballZ_ini_file.py'
Feb 19 19:50:52 compute-0 sudo[48122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:52 compute-0 python3.9[48125]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:52 compute-0 sudo[48122]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:53 compute-0 sudo[48275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khbrdtkadknupiwqhxeassbglvotggru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530653.0974379-135-147042354178176/AnsiballZ_ini_file.py'
Feb 19 19:50:53 compute-0 sudo[48275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:53 compute-0 python3.9[48278]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:53 compute-0 sudo[48275]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:53 compute-0 sudo[48428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indpdfpumptidbyrtmruyspbomxqvkdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530653.6727993-135-121810940266469/AnsiballZ_ini_file.py'
Feb 19 19:50:53 compute-0 sudo[48428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:54 compute-0 python3.9[48431]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:50:54 compute-0 sudo[48428]: pam_unix(sudo:session): session closed for user root
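The four ini_file edits above converge /etc/containers/containers.conf: a pids limit for containers, journald event logging, crun as the OCI runtime, and netavark as the network backend. Assuming the file started empty, the result would read approximately:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"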
Feb 19 19:50:54 compute-0 python3.9[48581]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:50:55 compute-0 sudo[48733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymbymfybwilqeymqedmilqrfxqxltjoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530654.9990137-175-224825624236021/AnsiballZ_dnf.py'
Feb 19 19:50:55 compute-0 sudo[48733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:55 compute-0 python3.9[48736]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:50:57 compute-0 sudo[48733]: pam_unix(sudo:session): session closed for user root
Feb 19 19:50:57 compute-0 sudo[48887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dieyzxrbqvvecvttoceiaejwzwivuxfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530657.3914788-183-36492982972011/AnsiballZ_dnf.py'
Feb 19 19:50:57 compute-0 sudo[48887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:50:57 compute-0 python3.9[48890]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:50:59 compute-0 sudo[48887]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:00 compute-0 sshd-session[48892]: Invalid user systemd from 103.154.77.48 port 44200
Feb 19 19:51:00 compute-0 sudo[49051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufufqlyiwgewewkroyatiryjchrzgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530659.9373767-193-100880819173689/AnsiballZ_dnf.py'
Feb 19 19:51:00 compute-0 sudo[49051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:00 compute-0 sshd-session[48892]: Received disconnect from 103.154.77.48 port 44200:11: Bye Bye [preauth]
Feb 19 19:51:00 compute-0 sshd-session[48892]: Disconnected from invalid user systemd 103.154.77.48 port 44200 [preauth]
Feb 19 19:51:00 compute-0 python3.9[49054]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:01 compute-0 sudo[49051]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:02 compute-0 sudo[49205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvfebsvgueojppnpdxghjppkuzrbdznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530662.0372784-202-253385841188666/AnsiballZ_dnf.py'
Feb 19 19:51:02 compute-0 sudo[49205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:02 compute-0 python3.9[49208]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:03 compute-0 sudo[49205]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:04 compute-0 sudo[49359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsubgdhzeoorjohmndzyznevfcgcrqhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530664.3932908-213-53304220960223/AnsiballZ_dnf.py'
Feb 19 19:51:04 compute-0 sudo[49359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:04 compute-0 python3.9[49362]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:06 compute-0 sudo[49359]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:06 compute-0 sudo[49516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpzljbmdgcmkavwebyfeourdbnjnqkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530666.4852774-221-245853507544137/AnsiballZ_dnf.py'
Feb 19 19:51:06 compute-0 sudo[49516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:06 compute-0 python3.9[49519]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:13 compute-0 sudo[49516]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:14 compute-0 sudo[49686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blhdxpefvbihqddirjojliqiocfvhjfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530673.9469378-230-173514223613793/AnsiballZ_dnf.py'
Feb 19 19:51:14 compute-0 sudo[49686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:14 compute-0 python3.9[49689]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:15 compute-0 sudo[49686]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:16 compute-0 sudo[49840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujebhjamsdqcgvrsrdaaphirdujrkifs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530675.9992092-239-117392069895047/AnsiballZ_dnf.py'
Feb 19 19:51:16 compute-0 sudo[49840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:16 compute-0 python3.9[49843]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:39 compute-0 sudo[49840]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:39 compute-0 sudo[50176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvezhduidesuyjivyrnyyoppptgspess ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530699.4204328-248-99931258443856/AnsiballZ_dnf.py'
Feb 19 19:51:39 compute-0 sudo[50176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:39 compute-0 python3.9[50179]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:41 compute-0 sudo[50176]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:41 compute-0 sudo[50333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsqkxybffsznimnosstoosnjgoyvtnao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530701.336129-258-230774223843139/AnsiballZ_dnf.py'
Feb 19 19:51:41 compute-0 sudo[50333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:41 compute-0 python3.9[50336]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:51:43 compute-0 sudo[50333]: pam_unix(sudo:session): session closed for user root
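The long series of dnf runs from 19:50:55 onward all pass download_only=True: packages for networking, virtualization, storage, and containers are pre-fetched into the dnf cache so later installs can proceed without hitting the repositories again. The pattern for each batch looks roughly like this (the variable name is hypothetical):

    - name: Pre-fetch a package batch into the dnf cache
      ansible.builtin.dnf:
        name: "{{ package_batch }}"  # hypothetical variable
        download_only: true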
Feb 19 19:51:44 compute-0 sudo[50491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-butmhdmhhrqvspwnvcmlrioudrgszfrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530704.101461-269-28685879246303/AnsiballZ_file.py'
Feb 19 19:51:44 compute-0 sudo[50491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:44 compute-0 python3.9[50494]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:51:44 compute-0 sudo[50491]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:44 compute-0 sudo[50667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkildhdhiaqyukkhzjxjdociqaplzqiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530704.650537-277-246067352765261/AnsiballZ_stat.py'
Feb 19 19:51:44 compute-0 sudo[50667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:45 compute-0 python3.9[50670]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:51:45 compute-0 sudo[50667]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:45 compute-0 sudo[50791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqshzbtwbrnzmzipgebdkkvfdgozpovc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530704.650537-277-246067352765261/AnsiballZ_copy.py'
Feb 19 19:51:45 compute-0 sudo[50791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:45 compute-0 python3.9[50794]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1771530704.650537-277-246067352765261/.source.json _original_basename=.ipwdhla_ follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:51:45 compute-0 sudo[50791]: pam_unix(sudo:session): session closed for user root
Feb 19 19:51:46 compute-0 sudo[50944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzzfdeksyzecrxdzjdsxebpydbcbmpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530705.8340158-295-207757430776361/AnsiballZ_podman_image.py'
Feb 19 19:51:46 compute-0 sudo[50944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:46 compute-0 python3.9[50947]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1743586426-lower\x2dmapped.mount: Deactivated successfully.
Feb 19 19:51:51 compute-0 podman[50959]: 2026-02-19 19:51:51.541476226 +0000 UTC m=+4.984179300 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 19 19:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:51:51 compute-0 sudo[50944]: pam_unix(sudo:session): session closed for user root
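The podman_image invocations in this stretch pull each service image up front. A reconstruction of the first one (tag=latest is logged but is presumably just the module default, with the explicit :current-podified tag in name taking effect, as the image-pull event above confirms):

    - name: Pull the ovn-controller image
      containers.podman.podman_image:
        name: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
        auth_file: /root/.config/containers/auth.json
        pull: true
        state: present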
Feb 19 19:51:52 compute-0 sudo[51253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppwpzfabtsjvcjqkufacunqqtdmaejov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530712.0066962-306-262448093329861/AnsiballZ_podman_image.py'
Feb 19 19:51:52 compute-0 sudo[51253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:51:52 compute-0 python3.9[51256]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:52:03 compute-0 podman[51268]: 2026-02-19 19:52:03.423035373 +0000 UTC m=+10.956901735 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 19:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:03 compute-0 sudo[51253]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:04 compute-0 sudo[51561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhspmtjsfffjzevluvjlfxbhvgyjgjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530723.9599535-316-57018428626972/AnsiballZ_podman_image.py'
Feb 19 19:52:04 compute-0 sudo[51561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:04 compute-0 python3.9[51564]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:15 compute-0 podman[51575]: 2026-02-19 19:52:15.473099721 +0000 UTC m=+11.012137735 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 19 19:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:15 compute-0 sudo[51561]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:16 compute-0 sudo[51832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neyifaeotpzovrprhsqwnuujcnqqayrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530735.969859-327-71499004969551/AnsiballZ_podman_image.py'
Feb 19 19:52:16 compute-0 sudo[51832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:16 compute-0 python3.9[51835]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:52:28 compute-0 podman[51847]: 2026-02-19 19:52:28.020919681 +0000 UTC m=+11.639654554 image pull 39912de8a1671ef27329277267076a00cb1dc71d7f9b7d4bbadf1cbd2c1f36c4 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Feb 19 19:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:28 compute-0 sudo[51832]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:28 compute-0 sudo[52165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpzmgwmoronbnwnqnlzblmiqdfucffwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530748.3040571-327-29989420966642/AnsiballZ_podman_image.py'
Feb 19 19:52:28 compute-0 sudo[52165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:28 compute-0 python3.9[52168]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:52:29 compute-0 podman[52180]: 2026-02-19 19:52:29.806650414 +0000 UTC m=+1.060758950 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Feb 19 19:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:29 compute-0 sudo[52165]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:30 compute-0 sudo[52450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oebkrkjwohrstyitkzzutgscnuurzvmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530750.167438-343-123243235510286/AnsiballZ_podman_image.py'
Feb 19 19:52:30 compute-0 sudo[52450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:30 compute-0 python3.9[52453]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:34 compute-0 podman[52466]: 2026-02-19 19:52:34.371476729 +0000 UTC m=+3.734023434 image pull 5a0c248a731dc2e1754b1906fede374f0f92203547e5b10eb435ef1a64b36296 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Feb 19 19:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:34 compute-0 sudo[52450]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:34 compute-0 sudo[52721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnhzyxrjugjzmkmtgqfbwqscrpjtndaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530754.6397753-343-8823631239615/AnsiballZ_podman_image.py'
Feb 19 19:52:34 compute-0 sudo[52721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:35 compute-0 python3.9[52724]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 19 19:52:39 compute-0 podman[52736]: 2026-02-19 19:52:39.773643489 +0000 UTC m=+4.654358071 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Feb 19 19:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:52:39 compute-0 sudo[52721]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:40 compute-0 sshd-session[45921]: Connection closed by 192.168.122.30 port 45718
Feb 19 19:52:40 compute-0 sshd-session[45918]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:52:40 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Feb 19 19:52:40 compute-0 systemd[1]: session-10.scope: Consumed 2min 2.111s CPU time.
Feb 19 19:52:40 compute-0 systemd-logind[810]: Session 10 logged out. Waiting for processes to exit.
Feb 19 19:52:40 compute-0 systemd-logind[810]: Removed session 10.
Feb 19 19:52:46 compute-0 sshd-session[52979]: Accepted publickey for zuul from 192.168.122.30 port 43216 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:52:46 compute-0 systemd-logind[810]: New session 11 of user zuul.
Feb 19 19:52:46 compute-0 systemd[1]: Started Session 11 of User zuul.
Feb 19 19:52:46 compute-0 sshd-session[52979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:52:47 compute-0 python3.9[53132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:52:48 compute-0 sudo[53286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipjeuutuonpuklgpktkwqaqtajercbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530768.5722616-32-23824674682496/AnsiballZ_getent.py'
Feb 19 19:52:48 compute-0 sudo[53286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:49 compute-0 python3.9[53289]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 19 19:52:49 compute-0 sudo[53286]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:49 compute-0 sudo[53440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggwmnhcttozepsgwdjpiyziepmsxwccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530769.2893312-40-255076061940424/AnsiballZ_group.py'
Feb 19 19:52:49 compute-0 sudo[53440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:49 compute-0 python3.9[53443]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 19 19:52:49 compute-0 groupadd[53444]: group added to /etc/group: name=openvswitch, GID=42476
Feb 19 19:52:49 compute-0 groupadd[53444]: group added to /etc/gshadow: name=openvswitch
Feb 19 19:52:49 compute-0 groupadd[53444]: new group: name=openvswitch, GID=42476
Feb 19 19:52:49 compute-0 sudo[53440]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:50 compute-0 sudo[53599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnykymbrtpmrxsiieqlkwgvdwucekchd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530769.9868731-48-204209451007969/AnsiballZ_user.py'
Feb 19 19:52:50 compute-0 sudo[53599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:50 compute-0 python3.9[53602]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 19 19:52:50 compute-0 useradd[53604]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/1
Feb 19 19:52:50 compute-0 useradd[53604]: add 'openvswitch' to group 'hugetlbfs'
Feb 19 19:52:50 compute-0 useradd[53604]: add 'openvswitch' to shadow group 'hugetlbfs'
Feb 19 19:52:51 compute-0 sudo[53599]: pam_unix(sudo:session): session closed for user root
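The group and user modules above create a fixed-identity openvswitch service account (UID/GID 42476) with hugetlbfs as a supplementary group, matching the groupadd/useradd audit lines. As tasks:

    - name: Create the openvswitch group
      ansible.builtin.group:
        name: openvswitch
        gid: 42476

    - name: Create the openvswitch service account
      ansible.builtin.user:
        name: openvswitch
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        comment: openvswitch user
        shell: /sbin/nologin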
Feb 19 19:52:51 compute-0 sudo[53760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufepzqagqgiascopaligstvhflicgvub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530771.581406-58-280795264281860/AnsiballZ_setup.py'
Feb 19 19:52:51 compute-0 sudo[53760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:52 compute-0 python3.9[53763]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:52:52 compute-0 sudo[53760]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:52 compute-0 sudo[53845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veffeoloqsnwlsoopbtidfevrzbggrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530771.581406-58-280795264281860/AnsiballZ_dnf.py'
Feb 19 19:52:52 compute-0 sudo[53845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:52 compute-0 python3.9[53848]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:52:54 compute-0 sudo[53845]: pam_unix(sudo:session): session closed for user root
Feb 19 19:52:55 compute-0 sudo[54008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiuvdahvinzjdcaxgwxxaqijlpdagrdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530774.8033507-72-162979310397606/AnsiballZ_dnf.py'
Feb 19 19:52:55 compute-0 sudo[54008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:52:55 compute-0 python3.9[54011]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:53:04 compute-0 sshd-session[54026]: Invalid user titu from 125.31.2.160 port 52388
Feb 19 19:53:04 compute-0 sshd-session[54026]: Received disconnect from 125.31.2.160 port 52388:11: Bye Bye [preauth]
Feb 19 19:53:04 compute-0 sshd-session[54026]: Disconnected from invalid user titu 125.31.2.160 port 52388 [preauth]
Feb 19 19:53:07 compute-0 kernel: SELinux:  Converting 2741 SID table entries...
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:53:07 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:53:07 compute-0 groupadd[54036]: group added to /etc/group: name=unbound, GID=994
Feb 19 19:53:07 compute-0 groupadd[54036]: group added to /etc/gshadow: name=unbound
Feb 19 19:53:07 compute-0 groupadd[54036]: new group: name=unbound, GID=994
Feb 19 19:53:07 compute-0 useradd[54043]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Feb 19 19:53:07 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 19 19:53:07 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 19 19:53:08 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 19:53:08 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 19:53:08 compute-0 systemd[1]: Reloading.
Feb 19 19:53:08 compute-0 systemd-rc-local-generator[54544]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:53:08 compute-0 systemd-sysv-generator[54548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:53:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 19:53:09 compute-0 sudo[54008]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 19:53:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 19:53:09 compute-0 systemd[1]: run-r9cd3aef413e74370a995111f119d8053.service: Deactivated successfully.
Feb 19 19:53:09 compute-0 sudo[55135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koxhmrwlpdrfjkvxuhfrppioeqvkkjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530789.348876-80-134686797266109/AnsiballZ_systemd.py'
Feb 19 19:53:09 compute-0 sudo[55135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:10 compute-0 python3.9[55138]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 19:53:10 compute-0 systemd[1]: Reloading.
Feb 19 19:53:10 compute-0 systemd-rc-local-generator[55166]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:53:10 compute-0 systemd-sysv-generator[55169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:53:10 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Feb 19 19:53:10 compute-0 chown[55187]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 19 19:53:10 compute-0 ovs-ctl[55192]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 19 19:53:10 compute-0 ovs-ctl[55192]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 19 19:53:10 compute-0 ovs-ctl[55192]: Starting ovsdb-server [  OK  ]
Feb 19 19:53:10 compute-0 ovs-vsctl[55242]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 19 19:53:10 compute-0 ovs-vsctl[55262]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e2fe6bb6-fad0-4563-8388-215a30f03e3f\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 19 19:53:10 compute-0 ovs-ctl[55192]: Configuring Open vSwitch system IDs [  OK  ]
Feb 19 19:53:10 compute-0 ovs-ctl[55192]: Enabling remote OVSDB managers [  OK  ]
Feb 19 19:53:10 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Feb 19 19:53:10 compute-0 ovs-vsctl[55268]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 19 19:53:10 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 19 19:53:10 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 19 19:53:10 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 19 19:53:10 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Feb 19 19:53:10 compute-0 ovs-ctl[55312]: Inserting openvswitch module [  OK  ]
Feb 19 19:53:11 compute-0 ovs-ctl[55281]: Starting ovs-vswitchd [  OK  ]
Feb 19 19:53:11 compute-0 ovs-vsctl[55330]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 19 19:53:11 compute-0 ovs-ctl[55281]: Enabling remote OVSDB managers [  OK  ]
Feb 19 19:53:11 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 19 19:53:11 compute-0 systemd[1]: Starting Open vSwitch...
Feb 19 19:53:11 compute-0 systemd[1]: Finished Open vSwitch.
Feb 19 19:53:11 compute-0 sudo[55135]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:11 compute-0 python3.9[55481]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:53:12 compute-0 sudo[55631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpbkofpufrpcrkynfcrkjtaxivcmqdlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530791.9979525-99-14220006163458/AnsiballZ_sefcontext.py'
Feb 19 19:53:12 compute-0 sudo[55631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:12 compute-0 python3.9[55634]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 19 19:53:13 compute-0 kernel: SELinux:  Converting 2755 SID table entries...
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:53:13 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:53:13 compute-0 sudo[55631]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:14 compute-0 python3.9[55790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:53:15 compute-0 sudo[55946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yveuocapoyuayseiczsqxkcriuxvjrqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530794.9824302-117-281161327144448/AnsiballZ_dnf.py'
Feb 19 19:53:15 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 19 19:53:15 compute-0 sudo[55946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:15 compute-0 python3.9[55949]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:53:16 compute-0 sudo[55946]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:17 compute-0 sudo[56100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmhkqwnosnfhqlncikxpsuopsmosjlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530796.868052-125-133215224683641/AnsiballZ_command.py'
Feb 19 19:53:17 compute-0 sudo[56100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:17 compute-0 python3.9[56103]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:53:18 compute-0 sudo[56100]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:18 compute-0 sudo[56388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxhtltjrtvrieijgmfsgbehydncwvew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530798.163333-133-238281785900790/AnsiballZ_file.py'
Feb 19 19:53:18 compute-0 sudo[56388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:18 compute-0 python3.9[56391]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 19 19:53:18 compute-0 sudo[56388]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:19 compute-0 python3.9[56541]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:53:19 compute-0 sudo[56693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymybehoqbhhflphowyejphpvmndiufiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530799.6141832-149-160607407324413/AnsiballZ_dnf.py'
Feb 19 19:53:19 compute-0 sudo[56693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:20 compute-0 python3.9[56696]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:53:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 19:53:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 19:53:21 compute-0 systemd[1]: Reloading.
Feb 19 19:53:21 compute-0 systemd-rc-local-generator[56734]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:53:21 compute-0 systemd-sysv-generator[56737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:53:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 19:53:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 19:53:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 19:53:22 compute-0 systemd[1]: run-r424e0a794201433bb0d4a64c5333b5d4.service: Deactivated successfully.
Feb 19 19:53:22 compute-0 sudo[56693]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:22 compute-0 sudo[57017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwrjzanzesuxjurrhjgqtofnydjgjzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530802.4930182-157-260650091724318/AnsiballZ_systemd.py'
Feb 19 19:53:22 compute-0 sudo[57017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:22 compute-0 python3.9[57020]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:53:23 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 19 19:53:23 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Feb 19 19:53:23 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Feb 19 19:53:23 compute-0 systemd[1]: Stopping Network Manager...
Feb 19 19:53:23 compute-0 NetworkManager[7688]: <info>  [1771530803.0155] caught SIGTERM, shutting down normally.
Feb 19 19:53:23 compute-0 NetworkManager[7688]: <info>  [1771530803.0166] dhcp4 (eth0): canceled DHCP transaction
Feb 19 19:53:23 compute-0 NetworkManager[7688]: <info>  [1771530803.0166] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:53:23 compute-0 NetworkManager[7688]: <info>  [1771530803.0166] dhcp4 (eth0): state changed no lease
Feb 19 19:53:23 compute-0 NetworkManager[7688]: <info>  [1771530803.0168] manager: NetworkManager state is now CONNECTED_SITE
Feb 19 19:53:23 compute-0 NetworkManager[7688]: <info>  [1771530803.0213] exiting (success)
Feb 19 19:53:23 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 19 19:53:23 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 19 19:53:23 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 19 19:53:23 compute-0 systemd[1]: Stopped Network Manager.
Feb 19 19:53:23 compute-0 systemd[1]: NetworkManager.service: Consumed 14.619s CPU time, 4.1M memory peak, read 0B from disk, written 39.5K to disk.
Feb 19 19:53:23 compute-0 systemd[1]: Starting Network Manager...
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.0704] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:c101bf30-d7ef-4612-9fa1-9cb228425d0e)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.0706] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.0749] manager[0x55eb8b6d0000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 19 19:53:23 compute-0 systemd[1]: Starting Hostname Service...
Feb 19 19:53:23 compute-0 systemd[1]: Started Hostname Service.
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1381] hostname: hostname: using hostnamed
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1382] hostname: static hostname changed from (none) to "compute-0"
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1386] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1389] manager[0x55eb8b6d0000]: rfkill: Wi-Fi hardware radio set enabled
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1389] manager[0x55eb8b6d0000]: rfkill: WWAN hardware radio set enabled
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1405] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1413] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1413] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1413] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1414] manager: Networking is enabled by state file
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1415] settings: Loaded settings plugin: keyfile (internal)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1417] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1436] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1443] dhcp: init: Using DHCP client 'internal'
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1445] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1448] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1451] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1456] device (lo): Activation: starting connection 'lo' (8f9ecf5e-4818-46e5-a2b3-372e6bc78723)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1460] device (eth0): carrier: link connected
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1463] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1467] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1467] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1471] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1476] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1480] device (eth1): carrier: link connected
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1484] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1487] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c06ca1cd-73d8-5811-b2c8-fc60202bb10e) (indicated)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1487] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1491] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1497] device (eth1): Activation: starting connection 'ci-private-network' (c06ca1cd-73d8-5811-b2c8-fc60202bb10e)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1501] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 19 19:53:23 compute-0 systemd[1]: Started Network Manager.
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1830] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1835] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1837] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1838] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1840] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1843] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1844] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1848] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1852] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1854] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1861] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1871] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1880] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1883] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1888] device (lo): Activation: successful, device activated.
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1896] dhcp4 (eth0): state changed new lease, address=38.102.83.220
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1902] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 19 19:53:23 compute-0 systemd[1]: Starting Network Manager Wait Online...
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1957] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1960] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1967] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1969] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1972] device (eth1): Activation: successful, device activated.
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1980] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1982] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1985] manager: NetworkManager state is now CONNECTED_SITE
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1987] device (eth0): Activation: successful, device activated.
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1991] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 19 19:53:23 compute-0 NetworkManager[57033]: <info>  [1771530803.1993] manager: startup complete
Feb 19 19:53:23 compute-0 sudo[57017]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:23 compute-0 systemd[1]: Finished Network Manager Wait Online.
Feb 19 19:53:23 compute-0 sudo[57244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqrqakcvtwmqtgsghccndkcefaeojnea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530803.3432977-165-85929007485315/AnsiballZ_dnf.py'
Feb 19 19:53:23 compute-0 sudo[57244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:23 compute-0 python3.9[57247]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:53:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 19:53:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 19:53:28 compute-0 systemd[1]: Reloading.
Feb 19 19:53:28 compute-0 systemd-rc-local-generator[57300]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:53:28 compute-0 systemd-sysv-generator[57303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:53:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 19:53:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 19:53:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 19:53:29 compute-0 systemd[1]: run-rf85d90dc27424093ad27cb5216775ad1.service: Deactivated successfully.
Feb 19 19:53:29 compute-0 sudo[57244]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:29 compute-0 sudo[57721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipxueqvlfccshxhdyxigklseuydanwru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530809.765891-177-12531610910201/AnsiballZ_stat.py'
Feb 19 19:53:29 compute-0 sudo[57721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:30 compute-0 python3.9[57724]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:53:30 compute-0 sudo[57721]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:30 compute-0 sudo[57874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzmrzfkuzebmhwvitmabrzhrogcpupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530810.2902246-186-269858609604636/AnsiballZ_ini_file.py'
Feb 19 19:53:30 compute-0 sudo[57874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:30 compute-0 python3.9[57877]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:30 compute-0 sudo[57874]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:31 compute-0 sudo[58029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledwlkyvxjtuzgixnvntxcyduxmjpkef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530811.105692-196-57933074133969/AnsiballZ_ini_file.py'
Feb 19 19:53:31 compute-0 sudo[58029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:31 compute-0 python3.9[58032]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:31 compute-0 sudo[58029]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:31 compute-0 sudo[58182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqkbtzdmzsubfpovhasrmlcnbdeobpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530811.6339478-196-33462439549007/AnsiballZ_ini_file.py'
Feb 19 19:53:31 compute-0 sudo[58182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:32 compute-0 python3.9[58185]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:32 compute-0 sudo[58182]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:32 compute-0 sudo[58335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkysxaempnotnvqgabrzaxnrynlsdcab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530812.1559596-211-38794434469957/AnsiballZ_ini_file.py'
Feb 19 19:53:32 compute-0 sudo[58335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:32 compute-0 python3.9[58338]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:32 compute-0 sudo[58335]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:32 compute-0 sudo[58488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crxcgzkoyhgtjodqociyocwkwogodzri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530812.7043824-211-165954863470605/AnsiballZ_ini_file.py'
Feb 19 19:53:32 compute-0 sudo[58488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:33 compute-0 python3.9[58491]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:33 compute-0 sudo[58488]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 19 19:53:33 compute-0 sudo[58641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbgowhngvcfbbnjyajqavbdoucmtlxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530813.2752993-226-117675025217515/AnsiballZ_stat.py'
Feb 19 19:53:33 compute-0 sudo[58641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:34 compute-0 python3.9[58644]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:53:34 compute-0 sudo[58641]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:34 compute-0 sudo[58765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfmvwmoxjpqncrouqzklcpgxfbjxakuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530813.2752993-226-117675025217515/AnsiballZ_copy.py'
Feb 19 19:53:34 compute-0 sudo[58765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:35 compute-0 python3.9[58768]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530813.2752993-226-117675025217515/.source _original_basename=.cyyphj1u follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:35 compute-0 sudo[58765]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:35 compute-0 sudo[58918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtkdnencjviswegnknzzqdnvprwiyfqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530815.1561792-241-183631171150104/AnsiballZ_file.py'
Feb 19 19:53:35 compute-0 sudo[58918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:35 compute-0 python3.9[58921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:35 compute-0 sudo[58918]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:36 compute-0 sudo[59071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywmzcuuodecpoogifbkusghtowcuoyvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530815.7323909-249-180595690180386/AnsiballZ_edpm_os_net_config_mappings.py'
Feb 19 19:53:36 compute-0 sudo[59071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:36 compute-0 python3.9[59074]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 19 19:53:36 compute-0 sudo[59071]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:36 compute-0 sudo[59226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vecqqsycvtfgmerplyzesmnvoomqityz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530816.3953428-258-48612304846413/AnsiballZ_file.py'
Feb 19 19:53:36 compute-0 sudo[59226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:36 compute-0 python3.9[59229]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:36 compute-0 sudo[59226]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:37 compute-0 sudo[59379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yslguonkxhvgiffaooxsdcjntuxtiayi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530817.0621214-268-239509738201207/AnsiballZ_stat.py'
Feb 19 19:53:37 compute-0 sudo[59379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:37 compute-0 sudo[59379]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:37 compute-0 sshd-session[59075]: Invalid user admin from 103.213.238.91 port 34368
Feb 19 19:53:37 compute-0 sudo[59503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jveqsoyywnreskvxurlajrlgacxvyjgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530817.0621214-268-239509738201207/AnsiballZ_copy.py'
Feb 19 19:53:37 compute-0 sudo[59503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:37 compute-0 sudo[59503]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:37 compute-0 sshd-session[59075]: Received disconnect from 103.213.238.91 port 34368:11: Bye Bye [preauth]
Feb 19 19:53:37 compute-0 sshd-session[59075]: Disconnected from invalid user admin 103.213.238.91 port 34368 [preauth]
Feb 19 19:53:38 compute-0 sudo[59656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiwjpnwcpcwcupasgssgudgzfvvfrwqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530818.0429626-283-237825154448413/AnsiballZ_slurp.py'
Feb 19 19:53:38 compute-0 sudo[59656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:38 compute-0 python3.9[59659]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 19 19:53:38 compute-0 sudo[59656]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:39 compute-0 sudo[59832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwoxyuxkfibglkynmqacwxitogmlwrdv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530818.8412447-292-70312181794351/async_wrapper.py j384969655746 300 /home/zuul/.ansible/tmp/ansible-tmp-1771530818.8412447-292-70312181794351/AnsiballZ_edpm_os_net_config.py _'
Feb 19 19:53:39 compute-0 sudo[59832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:39 compute-0 ansible-async_wrapper.py[59835]: Invoked with j384969655746 300 /home/zuul/.ansible/tmp/ansible-tmp-1771530818.8412447-292-70312181794351/AnsiballZ_edpm_os_net_config.py _
Feb 19 19:53:39 compute-0 ansible-async_wrapper.py[59838]: Starting module and watcher
Feb 19 19:53:39 compute-0 ansible-async_wrapper.py[59838]: Start watching 59839 (300)
Feb 19 19:53:39 compute-0 ansible-async_wrapper.py[59839]: Start module (59839)
Feb 19 19:53:39 compute-0 ansible-async_wrapper.py[59835]: Return async_wrapper task started.
Feb 19 19:53:39 compute-0 sudo[59832]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:39 compute-0 python3.9[59840]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=True purge_provider=
Feb 19 19:53:40 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 19 19:53:40 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 19 19:53:40 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 19 19:53:40 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 19 19:53:40 compute-0 kernel: cfg80211: failed to load regulatory.db
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.2409] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.2426] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3056] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3058] audit: op="connection-add" uuid="310e0b25-0705-4504-a49d-63cd7e53d689" name="br-ex-br" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3074] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3075] audit: op="connection-add" uuid="2041996d-0340-4e03-a797-2bde5fdf4605" name="br-ex-port" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3088] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3089] audit: op="connection-add" uuid="10cb4fb7-29a7-4ecd-9a6f-af8300345f3b" name="eth1-port" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3105] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3107] audit: op="connection-add" uuid="80a7c114-bc4c-4e6c-8cba-0bed11c71c43" name="vlan20-port" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3126] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3129] audit: op="connection-add" uuid="c697442d-b654-4c6a-b997-63331833e2fd" name="vlan21-port" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3143] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3145] audit: op="connection-add" uuid="15f72b88-d459-47b8-a1fd-493de4cd9e77" name="vlan22-port" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3169] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3188] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3191] audit: op="connection-add" uuid="4b56ed46-ebe9-464a-bfa8-c307c0fdd467" name="br-ex-if" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3564] audit: op="connection-update" uuid="c06ca1cd-73d8-5811-b2c8-fc60202bb10e" name="ci-private-network" args="ipv6.method,ipv6.dns,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.routing-rules,connection.controller,connection.port-type,connection.master,connection.slave-type,connection.timestamp,ovs-external-ids.data,ipv4.method,ipv4.dns,ipv4.never-default,ipv4.addresses,ipv4.routes,ipv4.routing-rules,ovs-interface.type" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3582] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3584] audit: op="connection-add" uuid="64df348f-7dcc-4aa2-b1c6-d2bbfb27787b" name="vlan20-if" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3598] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3600] audit: op="connection-add" uuid="46c62210-8cef-49da-9cba-7210da051509" name="vlan21-if" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3618] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3619] audit: op="connection-add" uuid="ed1733bc-9ea8-442b-832e-6f3634b4c660" name="vlan22-if" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3631] audit: op="connection-delete" uuid="5ffa268c-a4ee-37a5-8060-0039bff52aa4" name="Wired connection 1" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3642] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3644] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3649] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3652] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (310e0b25-0705-4504-a49d-63cd7e53d689)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3653] audit: op="connection-activate" uuid="310e0b25-0705-4504-a49d-63cd7e53d689" name="br-ex-br" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3654] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3655] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3659] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3663] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (2041996d-0340-4e03-a797-2bde5fdf4605)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3664] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3665] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3669] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3673] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (10cb4fb7-29a7-4ecd-9a6f-af8300345f3b)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3674] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3675] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3680] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3685] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (80a7c114-bc4c-4e6c-8cba-0bed11c71c43)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3687] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3688] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3693] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3698] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c697442d-b654-4c6a-b997-63331833e2fd)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3699] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3700] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3706] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3710] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (15f72b88-d459-47b8-a1fd-493de4cd9e77)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3711] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3714] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3716] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3723] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3724] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3727] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3732] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (4b56ed46-ebe9-464a-bfa8-c307c0fdd467)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3733] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3737] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3738] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3739] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3740] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3752] device (eth1): disconnecting for new activation request.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3752] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3756] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3758] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3759] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3762] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3763] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3766] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3770] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (64df348f-7dcc-4aa2-b1c6-d2bbfb27787b)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3770] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3773] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3776] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3777] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3780] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3781] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3784] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3789] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (46c62210-8cef-49da-9cba-7210da051509)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3790] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3793] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3795] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3797] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3800] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <warn>  [1771530821.3802] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3805] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3809] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ed1733bc-9ea8-442b-832e-6f3634b4c660)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3810] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3814] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3816] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3818] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3820] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3832] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3834] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3837] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3839] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3845] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3847] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3851] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3854] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3855] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3859] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 kernel: ovs-system: entered promiscuous mode
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3863] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3867] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3868] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3873] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3877] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 systemd-udevd[59844]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:53:41 compute-0 kernel: Timeout policy base is empty
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3880] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3883] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3887] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3893] dhcp4 (eth0): canceled DHCP transaction
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3893] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3893] dhcp4 (eth0): state changed no lease
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3895] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3903] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3908] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59841 uid=0 result="fail" reason="Device is not activated"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.3913] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 19 19:53:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4006] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4010] dhcp4 (eth0): state changed new lease, address=38.102.83.220
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4017] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4112] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 19 19:53:41 compute-0 kernel: br-ex: entered promiscuous mode
Feb 19 19:53:41 compute-0 kernel: vlan22: entered promiscuous mode
Feb 19 19:53:41 compute-0 systemd-udevd[59846]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4227] device (eth1): Activation: starting connection 'ci-private-network' (c06ca1cd-73d8-5811-b2c8-fc60202bb10e)
Feb 19 19:53:41 compute-0 kernel: vlan20: entered promiscuous mode
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4233] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4235] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4236] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4238] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4240] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4242] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4244] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4252] device (eth1): disconnecting for new activation request.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4253] audit: op="connection-activate" uuid="c06ca1cd-73d8-5811-b2c8-fc60202bb10e" name="ci-private-network" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4266] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4268] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4273] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4278] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4283] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4286] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4291] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4296] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4300] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4304] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4308] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4312] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4316] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 kernel: vlan21: entered promiscuous mode
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4329] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4336] device (eth1): Activation: starting connection 'ci-private-network' (c06ca1cd-73d8-5811-b2c8-fc60202bb10e)
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4358] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4362] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4370] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4370] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59841 uid=0 result="success"
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4373] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4380] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4397] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4399] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4415] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4421] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4430] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4440] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4440] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4443] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4446] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4458] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4465] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4466] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4468] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4469] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4474] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4479] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4482] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4487] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4491] device (eth1): Activation: successful, device activated.
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4640] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4641] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 19 19:53:41 compute-0 NetworkManager[57033]: <info>  [1771530821.4644] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 19 19:53:42 compute-0 NetworkManager[57033]: <info>  [1771530822.6062] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59841 uid=0 result="success"
Feb 19 19:53:42 compute-0 NetworkManager[57033]: <info>  [1771530822.7481] checkpoint[0x55eb8b6a6950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 19 19:53:42 compute-0 NetworkManager[57033]: <info>  [1771530822.7483] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.0428] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.0439] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 sudo[60178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywgqppyjpyeupeurmqsimmdxotxeahhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530822.6977272-292-264460968331130/AnsiballZ_async_status.py'
Feb 19 19:53:43 compute-0 sudo[60178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.2377] audit: op="networking-control" arg="global-dns-configuration" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.2794] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 19 19:53:43 compute-0 python3.9[60181]: ansible-ansible.legacy.async_status Invoked with jid=j384969655746.59835 mode=status _async_dir=/root/.ansible_async
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.2963] audit: op="networking-control" arg="global-dns-configuration" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.2987] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 sudo[60178]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.4139] checkpoint[0x55eb8b6a6a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 19 19:53:43 compute-0 NetworkManager[57033]: <info>  [1771530823.4143] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59841 uid=0 result="success"
Feb 19 19:53:43 compute-0 ansible-async_wrapper.py[59839]: Module complete (59839)
Feb 19 19:53:44 compute-0 ansible-async_wrapper.py[59838]: Done in kid B.
Feb 19 19:53:46 compute-0 sudo[60283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlhkrelxspzxiuhoeusmcnbsqfzgjpis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530822.6977272-292-264460968331130/AnsiballZ_async_status.py'
Feb 19 19:53:46 compute-0 sudo[60283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:46 compute-0 python3.9[60286]: ansible-ansible.legacy.async_status Invoked with jid=j384969655746.59835 mode=status _async_dir=/root/.ansible_async
Feb 19 19:53:46 compute-0 sudo[60283]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:46 compute-0 sudo[60384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthpbqahapvyukotudhehwzdjwwdxtng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530822.6977272-292-264460968331130/AnsiballZ_async_status.py'
Feb 19 19:53:46 compute-0 sudo[60384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:47 compute-0 python3.9[60387]: ansible-ansible.legacy.async_status Invoked with jid=j384969655746.59835 mode=cleanup _async_dir=/root/.ansible_async
Feb 19 19:53:47 compute-0 sudo[60384]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:47 compute-0 sudo[60537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zypcihqdjrkktzrcnrmsvlvfkfdipohb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530827.2853858-319-256868149103483/AnsiballZ_stat.py'
Feb 19 19:53:47 compute-0 sudo[60537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:47 compute-0 python3.9[60540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:53:47 compute-0 sudo[60537]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:47 compute-0 sudo[60661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiccjvcjvgizviljfwtjwynwfiaczdln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530827.2853858-319-256868149103483/AnsiballZ_copy.py'
Feb 19 19:53:47 compute-0 sudo[60661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:48 compute-0 python3.9[60664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530827.2853858-319-256868149103483/.source.returncode _original_basename=.9q20xl8y follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:48 compute-0 sudo[60661]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:48 compute-0 sudo[60814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpeuamnbqdepcfzanricvymnwjxmnuuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530828.329173-335-141654495473654/AnsiballZ_stat.py'
Feb 19 19:53:48 compute-0 sudo[60814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:48 compute-0 python3.9[60817]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:53:48 compute-0 sudo[60814]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:48 compute-0 sudo[60938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awogjlybcgpefednxrckoazogkekfbwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530828.329173-335-141654495473654/AnsiballZ_copy.py'
Feb 19 19:53:48 compute-0 sudo[60938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:49 compute-0 python3.9[60941]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530828.329173-335-141654495473654/.source.cfg _original_basename=.4orsyhq3 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:53:49 compute-0 sudo[60938]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:49 compute-0 sudo[61091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atuzuhqjsqiynvnkhzyddkxeobxyyedi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530829.3262222-350-76628152634848/AnsiballZ_systemd.py'
Feb 19 19:53:49 compute-0 sudo[61091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:53:50 compute-0 python3.9[61094]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:53:50 compute-0 systemd[1]: Reloading Network Manager...
Feb 19 19:53:50 compute-0 NetworkManager[57033]: <info>  [1771530830.0706] audit: op="reload" arg="0" pid=61099 uid=0 result="success"
Feb 19 19:53:50 compute-0 NetworkManager[57033]: <info>  [1771530830.0712] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 19 19:53:50 compute-0 systemd[1]: Reloaded Network Manager.
Feb 19 19:53:50 compute-0 sudo[61091]: pam_unix(sudo:session): session closed for user root
Feb 19 19:53:50 compute-0 sshd-session[52982]: Connection closed by 192.168.122.30 port 43216
Feb 19 19:53:50 compute-0 sshd-session[52979]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:53:50 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Feb 19 19:53:50 compute-0 systemd[1]: session-11.scope: Consumed 43.024s CPU time.
Feb 19 19:53:50 compute-0 systemd-logind[810]: Session 11 logged out. Waiting for processes to exit.
Feb 19 19:53:50 compute-0 systemd-logind[810]: Removed session 11.
Feb 19 19:53:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 19 19:53:55 compute-0 sshd-session[61131]: Accepted publickey for zuul from 192.168.122.30 port 52052 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:53:55 compute-0 systemd-logind[810]: New session 12 of user zuul.
Feb 19 19:53:55 compute-0 systemd[1]: Started Session 12 of User zuul.
Feb 19 19:53:55 compute-0 sshd-session[61131]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:53:56 compute-0 python3.9[61284]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:53:57 compute-0 python3.9[61439]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:53:58 compute-0 python3.9[61630]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:53:58 compute-0 sshd-session[61134]: Connection closed by 192.168.122.30 port 52052
Feb 19 19:53:58 compute-0 sshd-session[61131]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:53:58 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Feb 19 19:53:58 compute-0 systemd[1]: session-12.scope: Consumed 1.903s CPU time.
Feb 19 19:53:58 compute-0 systemd-logind[810]: Session 12 logged out. Waiting for processes to exit.
Feb 19 19:53:58 compute-0 systemd-logind[810]: Removed session 12.
Feb 19 19:53:59 compute-0 sshd-session[61479]: Invalid user n8n from 103.154.77.48 port 43098
Feb 19 19:53:59 compute-0 sshd-session[61479]: Received disconnect from 103.154.77.48 port 43098:11: Bye Bye [preauth]
Feb 19 19:53:59 compute-0 sshd-session[61479]: Disconnected from invalid user n8n 103.154.77.48 port 43098 [preauth]
Feb 19 19:54:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 19 19:54:04 compute-0 sshd-session[61659]: Accepted publickey for zuul from 192.168.122.30 port 35964 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:54:04 compute-0 systemd-logind[810]: New session 13 of user zuul.
Feb 19 19:54:04 compute-0 systemd[1]: Started Session 13 of User zuul.
Feb 19 19:54:04 compute-0 sshd-session[61659]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:54:05 compute-0 python3.9[61812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:54:06 compute-0 python3.9[61966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:54:06 compute-0 sudo[62121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnnrbgitbaglrfvhnualzoqqnseaieqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530846.4706557-35-250503361042191/AnsiballZ_setup.py'
Feb 19 19:54:06 compute-0 sudo[62121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:06 compute-0 python3.9[62124]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:54:07 compute-0 sudo[62121]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:07 compute-0 sudo[62206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpsarlmygxhlgzvtwkupesyjfuswtou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530846.4706557-35-250503361042191/AnsiballZ_dnf.py'
Feb 19 19:54:07 compute-0 sudo[62206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:07 compute-0 python3.9[62209]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:54:09 compute-0 sudo[62206]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:09 compute-0 sudo[62361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfecyxmafqvomxavygdpkmhambnlntyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530849.2033346-47-257847403692723/AnsiballZ_setup.py'
Feb 19 19:54:09 compute-0 sudo[62361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:09 compute-0 python3.9[62364]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:54:09 compute-0 sudo[62361]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:10 compute-0 sudo[62553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejmntwfjgngukgkwylrcjolgzvkhzxom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530850.0766761-58-225596070281816/AnsiballZ_file.py'
Feb 19 19:54:10 compute-0 sudo[62553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:10 compute-0 python3.9[62556]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:10 compute-0 sudo[62553]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:11 compute-0 sudo[62706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhzsmymcumnbussmnvnfsbkicnqevvcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530850.7682695-66-65607968846894/AnsiballZ_command.py'
Feb 19 19:54:11 compute-0 sudo[62706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:11 compute-0 python3.9[62709]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:54:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:54:11 compute-0 sudo[62706]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:12 compute-0 sudo[62871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcqderzllcesyjeauwoibanwwgqdbcuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530851.537529-74-181392258772656/AnsiballZ_stat.py'
Feb 19 19:54:12 compute-0 sudo[62871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:12 compute-0 python3.9[62874]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:12 compute-0 sudo[62871]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:12 compute-0 sudo[62950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvaqshapzmdqhjwbuoscmxugtcnxbqiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530851.537529-74-181392258772656/AnsiballZ_file.py'
Feb 19 19:54:12 compute-0 sudo[62950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:12 compute-0 python3.9[62953]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:12 compute-0 sudo[62950]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:12 compute-0 sudo[63104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujjwpgolnlpdzbitornhakwpkkdpihr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530852.736922-86-160890771278652/AnsiballZ_stat.py'
Feb 19 19:54:12 compute-0 sudo[63104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:13 compute-0 python3.9[63107]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:13 compute-0 sudo[63104]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:13 compute-0 sudo[63183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfylgstumdqkrbgzzfyxryqeffsivym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530852.736922-86-160890771278652/AnsiballZ_file.py'
Feb 19 19:54:13 compute-0 sudo[63183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:13 compute-0 python3.9[63186]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:13 compute-0 sudo[63183]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:14 compute-0 sudo[63336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdocylzmhhrnxazuohciaurlpwyirpws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530853.6929216-99-148830834949587/AnsiballZ_ini_file.py'
Feb 19 19:54:14 compute-0 sudo[63336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:14 compute-0 python3.9[63339]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:14 compute-0 sudo[63336]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:15 compute-0 sudo[63489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxenlpiapaqudxcylpqlmgmhpyfneyds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530855.594264-99-225315822045067/AnsiballZ_ini_file.py'
Feb 19 19:54:15 compute-0 sudo[63489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:15 compute-0 python3.9[63492]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:16 compute-0 sudo[63489]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:16 compute-0 sudo[63644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xinnwkrstwciozntsgjoyzwikoqxnpqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530856.1018848-99-8680618541006/AnsiballZ_ini_file.py'
Feb 19 19:54:16 compute-0 sudo[63644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:16 compute-0 python3.9[63647]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:16 compute-0 sudo[63644]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:17 compute-0 sshd-session[63592]: Invalid user claude from 158.174.210.161 port 32725
Feb 19 19:54:17 compute-0 sudo[63797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmkerqxtknnahubnjfrxhvzgeucgchxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530857.1664512-99-76941564585387/AnsiballZ_ini_file.py'
Feb 19 19:54:17 compute-0 sudo[63797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:17 compute-0 python3.9[63800]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:17 compute-0 sudo[63797]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:18 compute-0 sshd-session[63592]: Received disconnect from 158.174.210.161 port 32725:11: Bye Bye [preauth]
Feb 19 19:54:18 compute-0 sshd-session[63592]: Disconnected from invalid user claude 158.174.210.161 port 32725 [preauth]
Feb 19 19:54:18 compute-0 sudo[63950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoirxnbwaaqaxtjfhklfbgmcncrcqcqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530858.0638537-130-26555218331159/AnsiballZ_dnf.py'
Feb 19 19:54:18 compute-0 sudo[63950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:18 compute-0 python3.9[63953]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:54:19 compute-0 sudo[63950]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:20 compute-0 sudo[64104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teqgpxihoetnrkzcpfvmkvfgsjttsyly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530860.2119803-141-236423199550833/AnsiballZ_setup.py'
Feb 19 19:54:20 compute-0 sudo[64104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:20 compute-0 python3.9[64107]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:54:20 compute-0 sudo[64104]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:21 compute-0 sudo[64259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjxtgpmetaubcnpfcyljrfpllesbats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530861.1208353-149-177182557020521/AnsiballZ_stat.py'
Feb 19 19:54:21 compute-0 sudo[64259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:21 compute-0 python3.9[64262]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:54:21 compute-0 sudo[64259]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:21 compute-0 sudo[64412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlhlaeebvxyneiatgfddunsfewdwvaud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530861.7194164-158-262714693441229/AnsiballZ_stat.py'
Feb 19 19:54:21 compute-0 sudo[64412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:22 compute-0 python3.9[64415]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:54:22 compute-0 sudo[64412]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:22 compute-0 sudo[64565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icnobjtscathycszntggjasdvqdzhjgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530862.3361125-168-280671745742409/AnsiballZ_command.py'
Feb 19 19:54:22 compute-0 sudo[64565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:22 compute-0 python3.9[64568]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:54:22 compute-0 sudo[64565]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:23 compute-0 sudo[64719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bphmzeycgzggpkyilflsueoommeactxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530862.9387925-178-91180361305525/AnsiballZ_service_facts.py'
Feb 19 19:54:23 compute-0 sudo[64719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:23 compute-0 python3.9[64722]: ansible-service_facts Invoked
Feb 19 19:54:23 compute-0 network[64739]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 19 19:54:23 compute-0 network[64740]: 'network-scripts' will be removed from distribution in near future.
Feb 19 19:54:23 compute-0 network[64741]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 19:54:25 compute-0 sudo[64719]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:26 compute-0 sudo[65025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaefdpdlhalashlyihpfcpytlspgxowz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1771530865.8948371-193-169472861617135/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1771530865.8948371-193-169472861617135/args'
Feb 19 19:54:26 compute-0 sudo[65025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:26 compute-0 sudo[65025]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:26 compute-0 sudo[65193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufmkduszcqlrhxwnvajnkyzkddegzsxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530866.4943352-204-92959618776906/AnsiballZ_dnf.py'
Feb 19 19:54:26 compute-0 sudo[65193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:26 compute-0 python3.9[65196]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:54:28 compute-0 sudo[65193]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:29 compute-0 sudo[65347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzmvdunuadmugioomfjhfetmanuaxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530868.689627-217-23927560278591/AnsiballZ_package_facts.py'
Feb 19 19:54:29 compute-0 sudo[65347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:29 compute-0 python3.9[65350]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 19 19:54:29 compute-0 sudo[65347]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:30 compute-0 sudo[65500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdefijgkptneoiekayxdbfcvpzprcqvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530870.2134595-227-136248875233326/AnsiballZ_stat.py'
Feb 19 19:54:30 compute-0 sudo[65500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:30 compute-0 python3.9[65503]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:30 compute-0 sudo[65500]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:31 compute-0 sudo[65626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjdbzqrnkpzaidyoxgpfrngtvvehgxil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530870.2134595-227-136248875233326/AnsiballZ_copy.py'
Feb 19 19:54:31 compute-0 sudo[65626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:31 compute-0 python3.9[65629]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530870.2134595-227-136248875233326/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:31 compute-0 sudo[65626]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:31 compute-0 sudo[65781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmgwlqofhnzglmzwoovnxkpvqvulewlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530871.5705185-242-33007876644101/AnsiballZ_stat.py'
Feb 19 19:54:31 compute-0 sudo[65781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:31 compute-0 python3.9[65784]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:32 compute-0 sudo[65781]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:32 compute-0 sudo[65907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrealnyregzhrrqqwejetzdhvpsmefrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530871.5705185-242-33007876644101/AnsiballZ_copy.py'
Feb 19 19:54:32 compute-0 sudo[65907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:32 compute-0 python3.9[65910]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530871.5705185-242-33007876644101/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:32 compute-0 sudo[65907]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:33 compute-0 sudo[66062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snvgaxrltxxapprilhfjbrqdiagczzui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530872.850997-263-83261897122000/AnsiballZ_lineinfile.py'
Feb 19 19:54:33 compute-0 sudo[66062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:33 compute-0 python3.9[66065]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:33 compute-0 sudo[66062]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:34 compute-0 sudo[66217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbcsmkilvrpkkuqmjapjdkilbbkhnbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530874.02338-278-35408125948656/AnsiballZ_setup.py'
Feb 19 19:54:34 compute-0 sudo[66217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:34 compute-0 python3.9[66220]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:54:34 compute-0 sudo[66217]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:35 compute-0 sudo[66302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncewmpwvubgkzeqqckdwqgdckmtcwxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530874.02338-278-35408125948656/AnsiballZ_systemd.py'
Feb 19 19:54:35 compute-0 sudo[66302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:35 compute-0 python3.9[66305]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:54:35 compute-0 sudo[66302]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:36 compute-0 sudo[66457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ablwcakmvcncpzfyiprifhlufcadigjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530875.9990532-294-6560664954104/AnsiballZ_setup.py'
Feb 19 19:54:36 compute-0 sudo[66457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:36 compute-0 python3.9[66460]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:54:36 compute-0 sudo[66457]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:36 compute-0 sudo[66542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmjjudzzbzkbbpehmgcymizylfsrkss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530875.9990532-294-6560664954104/AnsiballZ_systemd.py'
Feb 19 19:54:36 compute-0 sudo[66542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:37 compute-0 python3.9[66545]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:54:37 compute-0 chronyd[804]: chronyd exiting
Feb 19 19:54:37 compute-0 systemd[1]: Stopping NTP client/server...
Feb 19 19:54:37 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Feb 19 19:54:37 compute-0 systemd[1]: Stopped NTP client/server.
Feb 19 19:54:37 compute-0 systemd[1]: Starting NTP client/server...
Feb 19 19:54:37 compute-0 chronyd[66553]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 19 19:54:37 compute-0 chronyd[66553]: Frequency -31.465 +/- 0.151 ppm read from /var/lib/chrony/drift
Feb 19 19:54:37 compute-0 chronyd[66553]: Loaded seccomp filter (level 2)
Feb 19 19:54:37 compute-0 systemd[1]: Started NTP client/server.
Feb 19 19:54:37 compute-0 sudo[66542]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:37 compute-0 sshd-session[61662]: Connection closed by 192.168.122.30 port 35964
Feb 19 19:54:37 compute-0 sshd-session[61659]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:54:37 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Feb 19 19:54:37 compute-0 systemd[1]: session-13.scope: Consumed 21.075s CPU time.
Feb 19 19:54:37 compute-0 systemd-logind[810]: Session 13 logged out. Waiting for processes to exit.
Feb 19 19:54:37 compute-0 systemd-logind[810]: Removed session 13.
Feb 19 19:54:43 compute-0 sshd-session[66579]: Accepted publickey for zuul from 192.168.122.30 port 50826 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:54:43 compute-0 systemd-logind[810]: New session 14 of user zuul.
Feb 19 19:54:43 compute-0 systemd[1]: Started Session 14 of User zuul.
Feb 19 19:54:43 compute-0 sshd-session[66579]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:54:44 compute-0 python3.9[66732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:54:44 compute-0 sudo[66886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxkwsinzatbftefcxobflrgrxfuxizix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530884.6438258-28-20796674665723/AnsiballZ_file.py'
Feb 19 19:54:44 compute-0 sudo[66886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:45 compute-0 python3.9[66889]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:45 compute-0 sudo[66886]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:45 compute-0 sudo[67062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjgfxkbdvkeqtgzrccevfmqsqizqdlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530885.3681798-36-238480268592814/AnsiballZ_stat.py'
Feb 19 19:54:45 compute-0 sudo[67062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:45 compute-0 python3.9[67065]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:45 compute-0 sudo[67062]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:46 compute-0 sudo[67141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-redliwewjwjaqfquxurqtuhmzylcbvby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530885.3681798-36-238480268592814/AnsiballZ_file.py'
Feb 19 19:54:46 compute-0 sudo[67141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:46 compute-0 python3.9[67144]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.anfosjkb recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:46 compute-0 sudo[67141]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:46 compute-0 sudo[67294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytejxvymirbibysztuvmzcsiealeemv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530886.606118-56-10675649631390/AnsiballZ_stat.py'
Feb 19 19:54:46 compute-0 sudo[67294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:46 compute-0 python3.9[67297]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:47 compute-0 sudo[67294]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:47 compute-0 sudo[67418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjacuralvzxmudhgsppbjzzdsidjfnhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530886.606118-56-10675649631390/AnsiballZ_copy.py'
Feb 19 19:54:47 compute-0 sudo[67418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:47 compute-0 python3.9[67421]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530886.606118-56-10675649631390/.source _original_basename=.vortmmma follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:47 compute-0 sudo[67418]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:47 compute-0 sudo[67571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefpbbczzvgphdkutoxjzwbymspqvntj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530887.7409935-72-213913654315400/AnsiballZ_file.py'
Feb 19 19:54:47 compute-0 sudo[67571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:48 compute-0 python3.9[67574]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:48 compute-0 sudo[67571]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:48 compute-0 sudo[67724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqtvatvoknazomnouuxkcttuxpelweq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530888.3107877-80-147801772119709/AnsiballZ_stat.py'
Feb 19 19:54:48 compute-0 sudo[67724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:48 compute-0 python3.9[67727]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:48 compute-0 sudo[67724]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:48 compute-0 sudo[67848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spnfiqljaepwvrmlhuvuaqovxqtjrksg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530888.3107877-80-147801772119709/AnsiballZ_copy.py'
Feb 19 19:54:48 compute-0 sudo[67848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:49 compute-0 python3.9[67851]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771530888.3107877-80-147801772119709/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:49 compute-0 sudo[67848]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:49 compute-0 sudo[68001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pttjbwtclmyahutvmlmkcoitqkzagzxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530889.2944095-80-179133868782729/AnsiballZ_stat.py'
Feb 19 19:54:49 compute-0 sudo[68001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:49 compute-0 python3.9[68004]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:49 compute-0 sudo[68001]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:49 compute-0 sudo[68125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjfacazqphlnyiqmkswdlmwsytmycom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530889.2944095-80-179133868782729/AnsiballZ_copy.py'
Feb 19 19:54:49 compute-0 sudo[68125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:50 compute-0 python3.9[68128]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771530889.2944095-80-179133868782729/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:54:50 compute-0 sudo[68125]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:50 compute-0 sudo[68278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwxmpzqrazvclkkdnflwwtgkwangexhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530890.3013778-109-40333944436924/AnsiballZ_file.py'
Feb 19 19:54:50 compute-0 sudo[68278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:50 compute-0 python3.9[68281]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:50 compute-0 sudo[68278]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:51 compute-0 sudo[68431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynrletithdjqnjeezfihnjpdqyqublgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530890.8235993-117-105300918674950/AnsiballZ_stat.py'
Feb 19 19:54:51 compute-0 sudo[68431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:51 compute-0 python3.9[68434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:51 compute-0 sudo[68431]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:51 compute-0 sudo[68555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmrpmfshdjuhfoflswycsqscvovnphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530890.8235993-117-105300918674950/AnsiballZ_copy.py'
Feb 19 19:54:51 compute-0 sudo[68555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:51 compute-0 python3.9[68558]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530890.8235993-117-105300918674950/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:51 compute-0 sudo[68555]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:52 compute-0 sudo[68708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfizogukssbgdkfkduocpusqivaebtaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530891.790601-132-40652586796286/AnsiballZ_stat.py'
Feb 19 19:54:52 compute-0 sudo[68708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:52 compute-0 python3.9[68711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:52 compute-0 sudo[68708]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:52 compute-0 sudo[68832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrgnzcfyzzegvyfxvjstmuuycsukzkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530891.790601-132-40652586796286/AnsiballZ_copy.py'
Feb 19 19:54:52 compute-0 sudo[68832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:52 compute-0 python3.9[68835]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530891.790601-132-40652586796286/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:52 compute-0 sudo[68832]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:53 compute-0 sudo[68985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brxaknwabttjicnvoiranitwsamfzoyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530892.8514943-147-106890186196541/AnsiballZ_systemd.py'
Feb 19 19:54:53 compute-0 sudo[68985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:53 compute-0 python3.9[68988]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:54:53 compute-0 systemd[1]: Reloading.
Feb 19 19:54:53 compute-0 systemd-rc-local-generator[69013]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:54:53 compute-0 systemd-sysv-generator[69017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:54:53 compute-0 systemd[1]: Reloading.
Feb 19 19:54:53 compute-0 systemd-sysv-generator[69065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:54:53 compute-0 systemd-rc-local-generator[69060]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:54:53 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Feb 19 19:54:53 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Feb 19 19:54:54 compute-0 sudo[68985]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:54 compute-0 sudo[69228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnmezzcensrhgjdsknccqlxcbewjndg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530894.1575572-155-243550656042806/AnsiballZ_stat.py'
Feb 19 19:54:54 compute-0 sudo[69228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:54 compute-0 python3.9[69231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:54 compute-0 sudo[69228]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:54 compute-0 sudo[69352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhtyiolvmvukmwrawriwauctlwpacwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530894.1575572-155-243550656042806/AnsiballZ_copy.py'
Feb 19 19:54:54 compute-0 sudo[69352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:54 compute-0 python3.9[69355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530894.1575572-155-243550656042806/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:54 compute-0 sudo[69352]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:55 compute-0 sudo[69505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjggpvchuxcahgnjychsstpswagclgcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530895.093933-170-1436265814268/AnsiballZ_stat.py'
Feb 19 19:54:55 compute-0 sudo[69505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:55 compute-0 python3.9[69508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:54:55 compute-0 sudo[69505]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:55 compute-0 sudo[69629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqbvxqsowbeukhsdlvdnbugogtbkecub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530895.093933-170-1436265814268/AnsiballZ_copy.py'
Feb 19 19:54:55 compute-0 sudo[69629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:55 compute-0 python3.9[69632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530895.093933-170-1436265814268/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:54:55 compute-0 sudo[69629]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:56 compute-0 sudo[69782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxhrxoxqyrtaayakqkcmtetnfhhlojfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530896.0873358-185-68256128132656/AnsiballZ_systemd.py'
Feb 19 19:54:56 compute-0 sudo[69782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:54:56 compute-0 python3.9[69785]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:54:56 compute-0 systemd[1]: Reloading.
Feb 19 19:54:56 compute-0 systemd-rc-local-generator[69810]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:54:56 compute-0 systemd-sysv-generator[69815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:54:56 compute-0 systemd[1]: Reloading.
Feb 19 19:54:56 compute-0 systemd-rc-local-generator[69852]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:54:56 compute-0 systemd-sysv-generator[69858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:54:56 compute-0 systemd[1]: Starting Create netns directory...
Feb 19 19:54:57 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 19 19:54:57 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 19 19:54:57 compute-0 systemd[1]: Finished Create netns directory.
Feb 19 19:54:57 compute-0 sudo[69782]: pam_unix(sudo:session): session closed for user root
Feb 19 19:54:57 compute-0 python3.9[70024]: ansible-ansible.builtin.service_facts Invoked
Feb 19 19:54:57 compute-0 network[70041]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 19 19:54:57 compute-0 network[70042]: 'network-scripts' will be removed from distribution in near future.
Feb 19 19:54:57 compute-0 network[70043]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 19:55:00 compute-0 sudo[70304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgbwznfsqwhcjvffjztpybutyarhacsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530900.714275-201-71174784737331/AnsiballZ_systemd.py'
Feb 19 19:55:00 compute-0 sudo[70304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:01 compute-0 python3.9[70307]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:55:01 compute-0 systemd[1]: Reloading.
Feb 19 19:55:01 compute-0 systemd-rc-local-generator[70334]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:55:01 compute-0 systemd-sysv-generator[70338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:55:01 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 19 19:55:01 compute-0 iptables.init[70355]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 19 19:55:01 compute-0 iptables.init[70355]: iptables: Flushing firewall rules: [  OK  ]
Feb 19 19:55:01 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Feb 19 19:55:01 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 19 19:55:01 compute-0 sudo[70304]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:02 compute-0 sudo[70550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwbhlomzveulnnlyqgcboizmccbxmcdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530901.9435287-201-75879882085259/AnsiballZ_systemd.py'
Feb 19 19:55:02 compute-0 sudo[70550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:02 compute-0 python3.9[70553]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:55:02 compute-0 sudo[70550]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:02 compute-0 sudo[70705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjhqiqmwvzttiwjwxwzudtvwhpqyfbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530902.7099688-217-162781394248866/AnsiballZ_systemd.py'
Feb 19 19:55:02 compute-0 sudo[70705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:03 compute-0 python3.9[70708]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:55:03 compute-0 systemd[1]: Reloading.
Feb 19 19:55:03 compute-0 systemd-rc-local-generator[70729]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:55:03 compute-0 systemd-sysv-generator[70738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:55:03 compute-0 systemd[1]: Starting Netfilter Tables...
Feb 19 19:55:03 compute-0 systemd[1]: Finished Netfilter Tables.
Feb 19 19:55:03 compute-0 sudo[70705]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:04 compute-0 sudo[70906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpskzfmpimbbtsjmfzvkjyvjqbmdaptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530903.6368496-225-49890252467171/AnsiballZ_command.py'
Feb 19 19:55:04 compute-0 sudo[70906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:04 compute-0 python3.9[70909]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:04 compute-0 sudo[70906]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:04 compute-0 sudo[71060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctatmxognvkguujtzeydkxqrnbyjswrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530904.5019624-239-166010809758577/AnsiballZ_stat.py'
Feb 19 19:55:04 compute-0 sudo[71060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:04 compute-0 python3.9[71063]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:04 compute-0 sudo[71060]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:05 compute-0 sudo[71186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpyapqmamdoslvzqaucwwfuqmchddjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530904.5019624-239-166010809758577/AnsiballZ_copy.py'
Feb 19 19:55:05 compute-0 sudo[71186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:05 compute-0 python3.9[71189]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530904.5019624-239-166010809758577/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:05 compute-0 sudo[71186]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:05 compute-0 sudo[71340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqskgqgobnfrooasvboxstloqetcuxsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530905.6159644-254-82700694235871/AnsiballZ_systemd.py'
Feb 19 19:55:05 compute-0 sudo[71340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:06 compute-0 python3.9[71343]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:55:06 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Feb 19 19:55:06 compute-0 sshd[1015]: Received SIGHUP; restarting.
Feb 19 19:55:06 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Feb 19 19:55:06 compute-0 sshd[1015]: Server listening on 0.0.0.0 port 22.
Feb 19 19:55:06 compute-0 sshd[1015]: Server listening on :: port 22.
Feb 19 19:55:06 compute-0 sudo[71340]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:06 compute-0 sudo[71497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidrwoqsldkkloeeoxhdluxmoquutfhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530906.3231072-262-137125349167393/AnsiballZ_file.py'
Feb 19 19:55:06 compute-0 sudo[71497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:06 compute-0 python3.9[71500]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:06 compute-0 sudo[71497]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:07 compute-0 sudo[71650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqtkqhhqrhvpyamncekhrrciirammfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530906.8375928-270-243307818030663/AnsiballZ_stat.py'
Feb 19 19:55:07 compute-0 sudo[71650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:07 compute-0 python3.9[71653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:07 compute-0 sudo[71650]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:07 compute-0 sudo[71774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdxyslmzvzboxjotxbuqexnuehhgsdhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530906.8375928-270-243307818030663/AnsiballZ_copy.py'
Feb 19 19:55:07 compute-0 sudo[71774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:07 compute-0 python3.9[71777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530906.8375928-270-243307818030663/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:07 compute-0 sudo[71774]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:08 compute-0 sudo[71927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sogixorwicpmhhtlyzpveiokybjnuxjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530908.104939-288-176300315641901/AnsiballZ_timezone.py'
Feb 19 19:55:08 compute-0 sudo[71927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:08 compute-0 python3.9[71930]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 19 19:55:08 compute-0 systemd[1]: Starting Time & Date Service...
Feb 19 19:55:08 compute-0 systemd[1]: Started Time & Date Service.
Feb 19 19:55:08 compute-0 sudo[71927]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:09 compute-0 sudo[72084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnezvaxqntcukeuhjvztgxemasgvyduf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530908.9735227-297-242391506149303/AnsiballZ_file.py'
Feb 19 19:55:09 compute-0 sudo[72084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:09 compute-0 python3.9[72087]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:09 compute-0 sudo[72084]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:09 compute-0 sudo[72237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glkkwilvcfgokiixxgbaohimxpefxmtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530909.4884543-305-65859969952837/AnsiballZ_stat.py'
Feb 19 19:55:09 compute-0 sudo[72237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:09 compute-0 python3.9[72240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:09 compute-0 sudo[72237]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:10 compute-0 sudo[72361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhcgcyefnnvsncgypozxlajxtslkoky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530909.4884543-305-65859969952837/AnsiballZ_copy.py'
Feb 19 19:55:10 compute-0 sudo[72361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:10 compute-0 python3.9[72364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530909.4884543-305-65859969952837/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:10 compute-0 sudo[72361]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:10 compute-0 sudo[72514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfktfslvggnxekvlbtgdkpfuhqgzuyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530910.5221407-320-222382009154138/AnsiballZ_stat.py'
Feb 19 19:55:10 compute-0 sudo[72514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:10 compute-0 python3.9[72517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:10 compute-0 sudo[72514]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:11 compute-0 sudo[72638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkqnbtwvspoqqwwyaupdfhqaihysgfsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530910.5221407-320-222382009154138/AnsiballZ_copy.py'
Feb 19 19:55:11 compute-0 sudo[72638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:11 compute-0 python3.9[72641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771530910.5221407-320-222382009154138/.source.yaml _original_basename=.xvam2ck3 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:11 compute-0 sudo[72638]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:11 compute-0 sudo[72791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgvjmjsrpmsxhsslbwjbwjnturawywfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530911.4615383-335-10845115002486/AnsiballZ_stat.py'
Feb 19 19:55:11 compute-0 sudo[72791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:11 compute-0 python3.9[72794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:11 compute-0 sudo[72791]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:12 compute-0 sudo[72915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrdrwyvaufwvzvxmkbxwdcytvqnlcbzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530911.4615383-335-10845115002486/AnsiballZ_copy.py'
Feb 19 19:55:12 compute-0 sudo[72915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:12 compute-0 python3.9[72918]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530911.4615383-335-10845115002486/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:12 compute-0 sudo[72915]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:12 compute-0 sudo[73068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oosoikghqkdovubnuljbauumjgsclqvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530912.4732237-350-248671988937236/AnsiballZ_command.py'
Feb 19 19:55:12 compute-0 sudo[73068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:12 compute-0 python3.9[73071]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:12 compute-0 sudo[73068]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:13 compute-0 sudo[73222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebnwoljkurdmewvqgpxqhcuuldszqdmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530913.0041494-358-153565037932819/AnsiballZ_command.py'
Feb 19 19:55:13 compute-0 sudo[73222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:13 compute-0 python3.9[73225]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:13 compute-0 sudo[73222]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:13 compute-0 sudo[73376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldfnyxroqdxjkfchdmafuzujnwfvhgus ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771530913.5404484-366-232890452528023/AnsiballZ_edpm_nftables_from_files.py'
Feb 19 19:55:13 compute-0 sudo[73376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:14 compute-0 python3[73379]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 19 19:55:14 compute-0 sudo[73376]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:14 compute-0 sudo[73529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksmeatmjjumrlitunrmsatqjyiuneafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530914.4139628-374-120226242008857/AnsiballZ_stat.py'
Feb 19 19:55:14 compute-0 sudo[73529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:15 compute-0 python3.9[73532]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:15 compute-0 sudo[73529]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:15 compute-0 sudo[73653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkgivjvmnhgnwapunhvmfkvosuakuetb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530914.4139628-374-120226242008857/AnsiballZ_copy.py'
Feb 19 19:55:15 compute-0 sudo[73653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:15 compute-0 python3.9[73656]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530914.4139628-374-120226242008857/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:15 compute-0 sudo[73653]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:15 compute-0 sudo[73806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgyczazwmvipmjbtfqckrwgdunxqmctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530915.6771355-389-209944708577894/AnsiballZ_stat.py'
Feb 19 19:55:15 compute-0 sudo[73806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:16 compute-0 python3.9[73809]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:16 compute-0 sudo[73806]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:16 compute-0 sudo[73930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hofoonvtcjxgllfqnnfhglakndfxfhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530915.6771355-389-209944708577894/AnsiballZ_copy.py'
Feb 19 19:55:16 compute-0 sudo[73930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:16 compute-0 python3.9[73933]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530915.6771355-389-209944708577894/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:16 compute-0 sudo[73930]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:16 compute-0 sudo[74083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okqxmiwqpibuuntvewvonwyielwdwoiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530916.6556678-404-73641396009724/AnsiballZ_stat.py'
Feb 19 19:55:16 compute-0 sudo[74083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:17 compute-0 python3.9[74086]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:17 compute-0 sudo[74083]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:17 compute-0 sudo[74207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtabucxtajomunajgfwobehtgkdhxzec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530916.6556678-404-73641396009724/AnsiballZ_copy.py'
Feb 19 19:55:17 compute-0 sudo[74207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:17 compute-0 python3.9[74210]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530916.6556678-404-73641396009724/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:17 compute-0 sudo[74207]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:17 compute-0 sudo[74360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-borbxxorimgelzzdtordqzoeqywaccrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530917.6898806-419-81452869950326/AnsiballZ_stat.py'
Feb 19 19:55:17 compute-0 sudo[74360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:18 compute-0 python3.9[74363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:18 compute-0 sudo[74360]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:18 compute-0 sudo[74484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buwroxedzcosldqnqoxblxepgbbxurrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530917.6898806-419-81452869950326/AnsiballZ_copy.py'
Feb 19 19:55:18 compute-0 sudo[74484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:18 compute-0 python3.9[74487]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530917.6898806-419-81452869950326/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:18 compute-0 sudo[74484]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:18 compute-0 sudo[74637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvnofrgqrgzidruwwgdmpijvodurwwhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530918.702718-434-203382801869914/AnsiballZ_stat.py'
Feb 19 19:55:18 compute-0 sudo[74637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:19 compute-0 python3.9[74640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:55:19 compute-0 sudo[74637]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:19 compute-0 sudo[74761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebqwpmocuiryyyttzusqoaukafbkdfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530918.702718-434-203382801869914/AnsiballZ_copy.py'
Feb 19 19:55:19 compute-0 sudo[74761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:19 compute-0 python3.9[74764]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530918.702718-434-203382801869914/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:19 compute-0 sudo[74761]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:19 compute-0 sudo[74914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcphfuemehzpypooohyzvutrkoqebvko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530919.7432392-449-276282135092544/AnsiballZ_file.py'
Feb 19 19:55:19 compute-0 sudo[74914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:20 compute-0 python3.9[74917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:20 compute-0 sudo[74914]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:20 compute-0 sudo[75067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpkyufwobvfbaysuhrfolkcbbcvulrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530920.2679706-457-203169822759013/AnsiballZ_command.py'
Feb 19 19:55:20 compute-0 sudo[75067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:20 compute-0 python3.9[75070]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:20 compute-0 sudo[75067]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:21 compute-0 sudo[75227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbitekevcaapoeyovmmqpqvtrjikhest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530920.8290215-465-277840899622954/AnsiballZ_blockinfile.py'
Feb 19 19:55:21 compute-0 sudo[75227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:21 compute-0 python3.9[75230]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:21 compute-0 sudo[75227]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:21 compute-0 sudo[75381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwnokzxrpnzscowycmpyaaawttjpvwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530921.6747024-474-237475848702014/AnsiballZ_file.py'
Feb 19 19:55:21 compute-0 sudo[75381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:22 compute-0 python3.9[75384]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:22 compute-0 sudo[75381]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:22 compute-0 sudo[75534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjcssrprxacodefbkcbdlnlxbiyxraue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530922.1644192-474-82633945104602/AnsiballZ_file.py'
Feb 19 19:55:22 compute-0 sudo[75534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:22 compute-0 python3.9[75537]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:22 compute-0 sudo[75534]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:23 compute-0 sudo[75687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwiotjqykvtrtebnydnblgtvzsvxnhew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530922.692801-489-70628148415643/AnsiballZ_mount.py'
Feb 19 19:55:23 compute-0 sudo[75687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:23 compute-0 python3.9[75690]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 19 19:55:23 compute-0 sudo[75687]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:23 compute-0 rsyslogd[1014]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 19:55:23 compute-0 sudo[75842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtdcyuruggcafcohceyoyfrhrxljbpcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530923.4657354-489-217482948311922/AnsiballZ_mount.py'
Feb 19 19:55:23 compute-0 sudo[75842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:23 compute-0 python3.9[75845]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 19 19:55:23 compute-0 sudo[75842]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:24 compute-0 sshd-session[66582]: Connection closed by 192.168.122.30 port 50826
Feb 19 19:55:24 compute-0 sshd-session[66579]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:55:24 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Feb 19 19:55:24 compute-0 systemd[1]: session-14.scope: Consumed 27.969s CPU time.
Feb 19 19:55:24 compute-0 systemd-logind[810]: Session 14 logged out. Waiting for processes to exit.
Feb 19 19:55:24 compute-0 systemd-logind[810]: Removed session 14.
Feb 19 19:55:29 compute-0 sshd-session[75871]: Accepted publickey for zuul from 192.168.122.30 port 49982 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:55:29 compute-0 systemd-logind[810]: New session 15 of user zuul.
Feb 19 19:55:29 compute-0 systemd[1]: Started Session 15 of User zuul.
Feb 19 19:55:29 compute-0 sshd-session[75871]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:55:29 compute-0 sudo[76024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giyrhgizgywbmstpcgdxdtgpsewvxqxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530929.384786-16-228137867004723/AnsiballZ_tempfile.py'
Feb 19 19:55:29 compute-0 sudo[76024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:30 compute-0 python3.9[76027]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 19 19:55:30 compute-0 sudo[76024]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:30 compute-0 sudo[76177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsagxcikchgtkqvafaxzrrdnvigzsmey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530930.1766305-28-228451544495006/AnsiballZ_stat.py'
Feb 19 19:55:30 compute-0 sudo[76177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:30 compute-0 python3.9[76180]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:55:30 compute-0 sudo[76177]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:31 compute-0 sudo[76330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvvpnyfhmmbcftzorfkddzerplqyyvgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530930.8940206-38-204531941679726/AnsiballZ_setup.py'
Feb 19 19:55:31 compute-0 sudo[76330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:31 compute-0 python3.9[76333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:55:31 compute-0 sudo[76330]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:32 compute-0 sudo[76483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqmvtmexpygtxqwxvdeegxaxeasajug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530931.881348-47-23291765377485/AnsiballZ_blockinfile.py'
Feb 19 19:55:32 compute-0 sudo[76483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:32 compute-0 python3.9[76486]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyXgwrNjTIDMhBYFbiiE0o8gREPgW8vSVtaiCA8/DQKczhjYR6rzpSSMFEYHFGLhO+xRgi9mzqB1WIYscbrqffbDgWeMUjYApA306bsdIompgVXqt99KEpuaLHbtrjOhC9WPVhJAfIhUgkqMq9YeXCKyMVxjDjE/OoTpyMMY7k3swTazwSUAKAfVT0u+knBlr3oif7+oOgZEhBe58VKlszOwbKMYVn+3AzPt4915SnJon8Q40nWNUIRIWJ5/HbyymhB6BZEOj+jQqxcE+IL8ESMe3BvK/nBvYSb7cofoRttmPA57BU/CJIWP1ODuRXvnJyVePXInWGgtH1050AybxVbWE76wJuZKb+RNgUX55v7Kn1Nn8ejUqKuItaUkdLVUvHH6DAosV2VwmbkMxQ6uokW37xXuEJtlF+/qRbrcCHicPrcdgjVmjU+T80K9PMYLjLj6EWp2EjDjlkHCMJn1y7ywmDJ3DH8AX98Mz59HyUhlyinIaAy6eLHEIsGhDdJRk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBrOzF1T9Y94LYlxVtFcZ6AbjnAXLDEuUxnG45fvfeIw
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOnCbQro38sqTUkZM4hVPZHk3la1rHEVo8zpE1ytgJTZaYEPZGTGGBRpgpCXBlUFiI0rHO2ttT8vgcltZmUYcg=
                                             create=True mode=0644 path=/tmp/ansible._q569uol state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:32 compute-0 sudo[76483]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:32 compute-0 sudo[76636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkojipuglhexpexjeyanhbqitkidpevh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530932.6098003-55-23592354726289/AnsiballZ_command.py'
Feb 19 19:55:32 compute-0 sudo[76636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:33 compute-0 python3.9[76639]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._q569uol' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:33 compute-0 sudo[76636]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:33 compute-0 sudo[76791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhepokmirvnzvirhrhsvnavffxyzfrtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530933.3058755-63-222103690328332/AnsiballZ_file.py'
Feb 19 19:55:33 compute-0 sudo[76791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:33 compute-0 python3.9[76794]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._q569uol state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:33 compute-0 sudo[76791]: pam_unix(sudo:session): session closed for user root
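
The tasks of session 15 trace a complete known-hosts distribution flow: create a scratch file, gather each host's SSH public keys, render them into the scratch file, overwrite /etc/ssh/ssh_known_hosts from it, and clean up. A condensed sketch of that flow; the task names, register wiring, and the known_host_lines variable are assumptions, the module arguments match the log, and the key material (shown verbatim in the blockinfile entry above) is elided:

    - name: Create a scratch file for the assembled known_hosts
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp

    - name: Gather SSH host public keys only
      ansible.builtin.setup:
        gather_subset:
          - '!all'
          - '!min'
          - ssh_host_key_rsa_public
          - ssh_host_key_ed25519_public
          - ssh_host_key_ecdsa_public

    - name: Render one "names type key" line per host key into the scratch file
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        create: true
        mode: '0644'
        block: "{{ known_host_lines }}"   # hypothetical var holding the lines logged above

    - name: Install the assembled file over /etc/ssh/ssh_known_hosts
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the scratch file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent

Staging through a scratch file keeps the blockinfile edits off the live path; only the final cat touches /etc/ssh/ssh_known_hosts.
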
Feb 19 19:55:34 compute-0 sshd-session[75874]: Connection closed by 192.168.122.30 port 49982
Feb 19 19:55:34 compute-0 sshd-session[75871]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:55:34 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Feb 19 19:55:34 compute-0 systemd[1]: session-15.scope: Consumed 2.797s CPU time.
Feb 19 19:55:34 compute-0 systemd-logind[810]: Session 15 logged out. Waiting for processes to exit.
Feb 19 19:55:34 compute-0 systemd-logind[810]: Removed session 15.
Feb 19 19:55:38 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 19 19:55:39 compute-0 sshd-session[76822]: Accepted publickey for zuul from 192.168.122.30 port 35064 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:55:39 compute-0 systemd-logind[810]: New session 16 of user zuul.
Feb 19 19:55:39 compute-0 systemd[1]: Started Session 16 of User zuul.
Feb 19 19:55:39 compute-0 sshd-session[76822]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:55:40 compute-0 python3.9[76977]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:55:41 compute-0 sudo[77131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edznekdxffoxvirknvbcpquxfchhxsmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530940.767563-27-75163942136204/AnsiballZ_systemd.py'
Feb 19 19:55:41 compute-0 sudo[77131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:41 compute-0 python3.9[77134]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 19 19:55:41 compute-0 sudo[77131]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:41 compute-0 sudo[77286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxegcoqpliftovezbxwxmnxaqmkykfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530941.735011-35-179248199661641/AnsiballZ_systemd.py'
Feb 19 19:55:41 compute-0 sudo[77286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:42 compute-0 python3.9[77289]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 19:55:42 compute-0 sudo[77286]: pam_unix(sudo:session): session closed for user root
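
The two systemd invocations above are the standard enable-then-start pair for sshd. Reconstructed as tasks, with only the names assumed:

    - name: Enable sshd at boot
      ansible.builtin.systemd:
        name: sshd
        enabled: true

    - name: Make sure sshd is running now
      ansible.builtin.systemd:
        name: sshd
        state: started

Splitting enabled and state across two tasks, as the log shows, lets each report its changed status independently.
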
Feb 19 19:55:42 compute-0 sudo[77440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imsgiscpzwurelopercbpdufpzswepra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530942.4738314-44-201822748440010/AnsiballZ_command.py'
Feb 19 19:55:42 compute-0 sudo[77440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:43 compute-0 python3.9[77443]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:43 compute-0 sudo[77440]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:43 compute-0 sudo[77594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgfkohgmqvfdimsprlkqnghcyrfzbgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530943.271774-52-54688757430/AnsiballZ_stat.py'
Feb 19 19:55:43 compute-0 sudo[77594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:43 compute-0 python3.9[77597]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:55:43 compute-0 sudo[77594]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:44 compute-0 sudo[77749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatkdiukrfdwgeltbpehpfgxqldmfvvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530944.0560493-60-37221745506367/AnsiballZ_command.py'
Feb 19 19:55:44 compute-0 sudo[77749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:44 compute-0 python3.9[77752]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:44 compute-0 sudo[77749]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:44 compute-0 sudo[77905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbkwtficoinmvxoaeashxlnfebymitoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530944.6308618-68-74767349644979/AnsiballZ_file.py'
Feb 19 19:55:44 compute-0 sudo[77905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:45 compute-0 python3.9[77908]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:55:45 compute-0 sudo[77905]: pam_unix(sudo:session): session closed for user root
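
Session 16 applies the EDPM firewall in two stages: load the chain definitions, then, apparently gated on the edpm-rules.nft.changed marker that is stat'ed and later removed, flush and reload the rule files through a single nft run. A sketch of that flow; the when-guard wiring and task names are assumptions, the commands are verbatim from the log:

    - name: Load EDPM chain definitions
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the rule set changed since the last apply
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed

    - name: Flush and reload the rules in one nft transaction
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists   # guard is an assumption

    - name: Clear the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent

Feeding all three files to one `nft -f -` makes the flush and the re-add commit atomically, so a parse error leaves the previous rule set in place rather than an empty one.
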
Feb 19 19:55:45 compute-0 sshd-session[76825]: Connection closed by 192.168.122.30 port 35064
Feb 19 19:55:45 compute-0 sshd-session[76822]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:55:45 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Feb 19 19:55:45 compute-0 systemd[1]: session-16.scope: Consumed 3.825s CPU time.
Feb 19 19:55:45 compute-0 systemd-logind[810]: Session 16 logged out. Waiting for processes to exit.
Feb 19 19:55:45 compute-0 systemd-logind[810]: Removed session 16.
Feb 19 19:55:50 compute-0 sshd-session[76883]: Received disconnect from 103.103.245.7 port 46794:11: Bye Bye [preauth]
Feb 19 19:55:50 compute-0 sshd-session[76883]: Disconnected from authenticating user root 103.103.245.7 port 46794 [preauth]
Feb 19 19:55:50 compute-0 sshd-session[77933]: Accepted publickey for zuul from 192.168.122.30 port 51896 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:55:50 compute-0 systemd-logind[810]: New session 17 of user zuul.
Feb 19 19:55:50 compute-0 systemd[1]: Started Session 17 of User zuul.
Feb 19 19:55:50 compute-0 sshd-session[77933]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:55:51 compute-0 python3.9[78086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:55:52 compute-0 sudo[78240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohvvpomgiuqxfemljsltsynfmhxpvtia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530951.8309412-29-36832772675803/AnsiballZ_setup.py'
Feb 19 19:55:52 compute-0 sudo[78240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:52 compute-0 python3.9[78243]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:55:52 compute-0 sudo[78240]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:52 compute-0 sudo[78325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilqfsyfqoxnqjihsckarlssvfydighks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530951.8309412-29-36832772675803/AnsiballZ_dnf.py'
Feb 19 19:55:52 compute-0 sudo[78325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:55:53 compute-0 python3.9[78328]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 19 19:55:54 compute-0 sudo[78325]: pam_unix(sudo:session): session closed for user root
Feb 19 19:55:55 compute-0 python3.9[78479]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:55:56 compute-0 python3.9[78630]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
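
Session 17 is a reboot-required probe: install yum-utils (which ships needs-restarting), ask the package manager whether a reboot is pending, and look for explicit flag files under /var/lib/openstack/reboot_required/. A sketch; the failed_when handling and the registered names are assumptions:

    - name: Ensure needs-restarting is available
      ansible.builtin.dnf:
        name: yum-utils

    - name: Check for a pending reboot (rc 1 means one is needed)
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting
      failed_when: needs_restarting.rc not in [0, 1]

    - name: Look for reboot-required flag files
      ansible.builtin.find:
        paths: /var/lib/openstack/reboot_required/
        file_type: file
      register: reboot_flags
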
Feb 19 19:55:56 compute-0 python3.9[78780]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:55:57 compute-0 python3.9[78930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:55:57 compute-0 sshd-session[77936]: Connection closed by 192.168.122.30 port 51896
Feb 19 19:55:57 compute-0 sshd-session[77933]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:55:57 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Feb 19 19:55:57 compute-0 systemd[1]: session-17.scope: Consumed 5.115s CPU time.
Feb 19 19:55:57 compute-0 systemd-logind[810]: Session 17 logged out. Waiting for processes to exit.
Feb 19 19:55:57 compute-0 systemd-logind[810]: Removed session 17.
Feb 19 19:56:03 compute-0 sshd-session[78955]: Accepted publickey for zuul from 192.168.122.30 port 42508 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:56:03 compute-0 systemd-logind[810]: New session 18 of user zuul.
Feb 19 19:56:03 compute-0 systemd[1]: Started Session 18 of User zuul.
Feb 19 19:56:03 compute-0 sshd-session[78955]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:56:04 compute-0 python3.9[79108]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:56:05 compute-0 sudo[79262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlovenrweekoraukvikapymkkocmses ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530965.2627318-45-17851260306884/AnsiballZ_file.py'
Feb 19 19:56:05 compute-0 sudo[79262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:05 compute-0 python3.9[79265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:05 compute-0 sudo[79262]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:06 compute-0 sudo[79415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zugcrkqxfrfmbwoabatlhnqsuhgaziwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530965.9688232-45-42073965641235/AnsiballZ_file.py'
Feb 19 19:56:06 compute-0 sudo[79415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:06 compute-0 python3.9[79418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:06 compute-0 sudo[79415]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:06 compute-0 sudo[79568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfjfxltrvxhytowwnxrccnvkuicuqgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530966.5513442-60-177742668373522/AnsiballZ_stat.py'
Feb 19 19:56:06 compute-0 sudo[79568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:07 compute-0 python3.9[79571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:07 compute-0 sudo[79568]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:07 compute-0 sudo[79692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plpoaadvejlzgkrekkrpzcyuycrxlchc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530966.5513442-60-177742668373522/AnsiballZ_copy.py'
Feb 19 19:56:07 compute-0 sudo[79692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:07 compute-0 python3.9[79695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530966.5513442-60-177742668373522/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d19a225471962701f1a104063686a022fcd7438b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:07 compute-0 sudo[79692]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:08 compute-0 sudo[79845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfyibtmeehgfxivdqpwxqnhlvljhpyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530967.861813-60-220500163996048/AnsiballZ_stat.py'
Feb 19 19:56:08 compute-0 sudo[79845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:08 compute-0 python3.9[79848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:08 compute-0 sudo[79845]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:08 compute-0 sudo[79969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqnpiarwimcaqruuekxdhsgydftkhfwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530967.861813-60-220500163996048/AnsiballZ_copy.py'
Feb 19 19:56:08 compute-0 sudo[79969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:08 compute-0 python3.9[79972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530967.861813-60-220500163996048/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3cea69d549c191ed947916c461cd51056178f80c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:08 compute-0 sudo[79969]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:09 compute-0 sudo[80122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liphiopwquwerwbetqjcucrzgmecnyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530968.8481069-60-119846664758043/AnsiballZ_stat.py'
Feb 19 19:56:09 compute-0 sudo[80122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:09 compute-0 python3.9[80125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:09 compute-0 sudo[80122]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:09 compute-0 sudo[80246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiizwukawyuztgfduvimphgdylovwvjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530968.8481069-60-119846664758043/AnsiballZ_copy.py'
Feb 19 19:56:09 compute-0 sudo[80246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:09 compute-0 python3.9[80249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530968.8481069-60-119846664758043/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b497391b32fe7f12d765d8406007b4b9e9f20e49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:09 compute-0 sudo[80246]: pam_unix(sudo:session): session closed for user root
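
From here to the end of the section the same certificate-drop pattern repeats once per service (telemetry-power-monitoring above, then telemetry, ovn, libvirt, and neutron-metadata): create a root-owned 0755 directory labeled container_file_t, then stat-and-copy tls.crt, ca.crt, and tls.key into it as 0600 files. One sketch covers all five; the loop and task names are assumptions, while the paths, modes, ownership, and SELinux type come from the log:

    - name: Create the per-service certificate directory
      ansible.builtin.file:
        path: /var/lib/openstack/certs/telemetry-power-monitoring/default
        state: directory
        owner: root
        group: root
        mode: '0755'
        setype: container_file_t   # lets containers read the bind-mounted certs

    - name: Install the certificate, CA chain, and private key
      ansible.builtin.copy:
        src: "compute-0.ctlplane.example.com-{{ item }}"
        dest: "/var/lib/openstack/certs/telemetry-power-monitoring/default/{{ item }}"
        owner: root
        group: root
        mode: '0600'
      loop:
        - tls.crt
        - ca.crt
        - tls.key

The differing tls.crt checksums show each service gets its own issued certificate, while the telemetry ca.crt matches the one above and the ovn and neutron-metadata ca.crt copies share another, i.e. a small set of common CAs.
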
Feb 19 19:56:10 compute-0 sudo[80399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puvvqtjjwusviephswcluagdsisattlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530969.92104-104-100900923455023/AnsiballZ_file.py'
Feb 19 19:56:10 compute-0 sudo[80399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:10 compute-0 python3.9[80402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:10 compute-0 sudo[80399]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:10 compute-0 sudo[80552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmsepeziyucvihewfpaiahndjqrixjre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530970.547196-104-272621855371115/AnsiballZ_file.py'
Feb 19 19:56:10 compute-0 sudo[80552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:10 compute-0 python3.9[80555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:11 compute-0 sudo[80552]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:11 compute-0 sudo[80705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epdeytipmrygnypopomyirxpbwfbdqwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530971.1748743-119-2280251876439/AnsiballZ_stat.py'
Feb 19 19:56:11 compute-0 sudo[80705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:11 compute-0 python3.9[80708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:11 compute-0 sudo[80705]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:11 compute-0 sudo[80829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhajqogrqkrncgaovzezuzzintaxesep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530971.1748743-119-2280251876439/AnsiballZ_copy.py'
Feb 19 19:56:11 compute-0 sudo[80829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:12 compute-0 python3.9[80832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530971.1748743-119-2280251876439/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d893599ad99fed2399cd9b478876c13f3cf4e736 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:12 compute-0 sudo[80829]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:12 compute-0 sudo[80982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drredaoxsqkjivfozfejirzleajheabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530972.237131-119-139031358831621/AnsiballZ_stat.py'
Feb 19 19:56:12 compute-0 sudo[80982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:12 compute-0 python3.9[80985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:12 compute-0 sudo[80982]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:12 compute-0 sudo[81106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dduppbzmitgygjnlzyshwfrdcjemizsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530972.237131-119-139031358831621/AnsiballZ_copy.py'
Feb 19 19:56:12 compute-0 sudo[81106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:13 compute-0 python3.9[81109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530972.237131-119-139031358831621/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3cea69d549c191ed947916c461cd51056178f80c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:13 compute-0 sudo[81106]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:13 compute-0 sudo[81259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoyaqykaqytohrfosdjeayzjqaflnnww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530973.4517305-119-9355338097035/AnsiballZ_stat.py'
Feb 19 19:56:13 compute-0 sudo[81259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:13 compute-0 python3.9[81262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:13 compute-0 sudo[81259]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:14 compute-0 sudo[81383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvjxyfbglyyyhjfyymavcilpqqzkxbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530973.4517305-119-9355338097035/AnsiballZ_copy.py'
Feb 19 19:56:14 compute-0 sudo[81383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:14 compute-0 python3.9[81386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530973.4517305-119-9355338097035/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7e922d6e61f3481b8a0525fd9f54fc0076b78aef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:14 compute-0 sudo[81383]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:14 compute-0 sudo[81536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhbycavilwaoumqajolytuupbkpxadjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530974.5007093-163-107959698915009/AnsiballZ_file.py'
Feb 19 19:56:14 compute-0 sudo[81536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:14 compute-0 python3.9[81539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:14 compute-0 sudo[81536]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:15 compute-0 sudo[81689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhuodzudpzvbcurcnmgnjavtclymwswj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530975.0699391-163-164553132455229/AnsiballZ_file.py'
Feb 19 19:56:15 compute-0 sudo[81689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:15 compute-0 python3.9[81692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:15 compute-0 sudo[81689]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:15 compute-0 sudo[81842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhjfvjhzinrpzgjmfwxnrwfyaambbijt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530975.653517-178-23909182883141/AnsiballZ_stat.py'
Feb 19 19:56:15 compute-0 sudo[81842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:16 compute-0 python3.9[81845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:16 compute-0 sudo[81842]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:16 compute-0 sudo[81966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwbepxqolffpzyvstnnqbntimvygbfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530975.653517-178-23909182883141/AnsiballZ_copy.py'
Feb 19 19:56:16 compute-0 sudo[81966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:16 compute-0 python3.9[81969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530975.653517-178-23909182883141/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7afaa6dc03db4f1b48cc707c1d37e910c7dfc23c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:16 compute-0 sudo[81966]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:16 compute-0 sudo[82119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqlceukbukiramkmfruklhvpvkymntoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530976.7605221-178-177313107224141/AnsiballZ_stat.py'
Feb 19 19:56:16 compute-0 sudo[82119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:17 compute-0 python3.9[82122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:17 compute-0 sudo[82119]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:17 compute-0 sudo[82243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mokcrqjnrtcwrxpktbfbujkcbrtjsprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530976.7605221-178-177313107224141/AnsiballZ_copy.py'
Feb 19 19:56:17 compute-0 sudo[82243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:17 compute-0 python3.9[82246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530976.7605221-178-177313107224141/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=724dd103dd667a5950f17d3814ac1149c7363169 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:17 compute-0 sudo[82243]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:17 compute-0 sudo[82396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwemppdnnrcqtewdgrqvjuqsnshbvbzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530977.7761397-178-184528276791879/AnsiballZ_stat.py'
Feb 19 19:56:17 compute-0 sudo[82396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:18 compute-0 python3.9[82399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:18 compute-0 sudo[82396]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:18 compute-0 sudo[82520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvixhwmtwellexhbshetjjfrbrvxdywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530977.7761397-178-184528276791879/AnsiballZ_copy.py'
Feb 19 19:56:18 compute-0 sudo[82520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:18 compute-0 python3.9[82523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530977.7761397-178-184528276791879/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b66b08bfcd42cb99e94467f14d3aaaf8c6a31314 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:18 compute-0 sudo[82520]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:19 compute-0 sudo[82675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlqyzyaqhibyheehaukaknfyprdnkjyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530978.9343972-222-127362288194299/AnsiballZ_file.py'
Feb 19 19:56:19 compute-0 sudo[82675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:19 compute-0 python3.9[82678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:19 compute-0 sudo[82675]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:19 compute-0 sudo[82828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtlleyezrrxzuxcauwxveryhlkbtkirg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530979.4191175-222-89596661870219/AnsiballZ_file.py'
Feb 19 19:56:19 compute-0 sudo[82828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:19 compute-0 python3.9[82831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:19 compute-0 sudo[82828]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:20 compute-0 sudo[82981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-defszigngdzlxiferqmqjqopujqjniut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530979.960371-237-183342615105198/AnsiballZ_stat.py'
Feb 19 19:56:20 compute-0 sudo[82981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:20 compute-0 sshd-session[82572]: Received disconnect from 125.31.2.160 port 57758:11: Bye Bye [preauth]
Feb 19 19:56:20 compute-0 sshd-session[82572]: Disconnected from authenticating user root 125.31.2.160 port 57758 [preauth]
Feb 19 19:56:20 compute-0 python3.9[82984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:20 compute-0 sudo[82981]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:20 compute-0 sudo[83105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byyigqaxvqfjkwauxuvtmpsrlvbeoefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530979.960371-237-183342615105198/AnsiballZ_copy.py'
Feb 19 19:56:20 compute-0 sudo[83105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:20 compute-0 python3.9[83108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530979.960371-237-183342615105198/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f9f0e55817992cecd590db9cbc7a25943a1e5167 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:21 compute-0 sudo[83105]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:21 compute-0 sudo[83258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvscvauqplkaonzomfibaihuzonfsbuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530981.1025143-237-119249163285861/AnsiballZ_stat.py'
Feb 19 19:56:21 compute-0 sudo[83258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:21 compute-0 python3.9[83261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:21 compute-0 sudo[83258]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:21 compute-0 sudo[83382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eppmtsiaipoxcjwrioezamocqkvvjrwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530981.1025143-237-119249163285861/AnsiballZ_copy.py'
Feb 19 19:56:21 compute-0 sudo[83382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:22 compute-0 python3.9[83385]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530981.1025143-237-119249163285861/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c1cbc6bc0691e0e91a5e676419bb8d46878b1687 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:22 compute-0 sudo[83382]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:22 compute-0 sudo[83535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjmbxhfhecyhaauttujwwmvsmihepjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530982.17972-237-226242793949183/AnsiballZ_stat.py'
Feb 19 19:56:22 compute-0 sudo[83535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:22 compute-0 python3.9[83538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:22 compute-0 sudo[83535]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:22 compute-0 sudo[83659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmvfqdklzdzmxzbimakesgnuhyoyjyoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530982.17972-237-226242793949183/AnsiballZ_copy.py'
Feb 19 19:56:22 compute-0 sudo[83659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:23 compute-0 python3.9[83662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530982.17972-237-226242793949183/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e5f61f9a00dcb5ad2ade6bedff35ca1b53d73e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:23 compute-0 sudo[83659]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:23 compute-0 sudo[83812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vridjpqjhlviajcxtvldgtisizynquhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530983.2188864-281-57078177232379/AnsiballZ_file.py'
Feb 19 19:56:23 compute-0 sudo[83812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:23 compute-0 python3.9[83815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:23 compute-0 sudo[83812]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:23 compute-0 sudo[83965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdapydggcmuamhryicarynhognnquzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530983.7805176-281-214903266828172/AnsiballZ_file.py'
Feb 19 19:56:23 compute-0 sudo[83965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:24 compute-0 python3.9[83968]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:24 compute-0 sudo[83965]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:24 compute-0 sudo[84118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswayyspydhaapefskqrznrkkozwthqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530984.4957578-296-183326949298114/AnsiballZ_stat.py'
Feb 19 19:56:24 compute-0 sudo[84118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:25 compute-0 python3.9[84121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:25 compute-0 sudo[84118]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:25 compute-0 sudo[84242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iblxiziqlojeeirkettbnbjppcqjjzqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530984.4957578-296-183326949298114/AnsiballZ_copy.py'
Feb 19 19:56:25 compute-0 sudo[84242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:25 compute-0 python3.9[84245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530984.4957578-296-183326949298114/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1e720ec326d5840d7c046bc83c265b610047aef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:25 compute-0 sudo[84242]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:25 compute-0 sudo[84395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkrqwaxqzupuzemgvymuoddhouwilbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530985.6665442-296-117229047791352/AnsiballZ_stat.py'
Feb 19 19:56:25 compute-0 sudo[84395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:26 compute-0 python3.9[84398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:26 compute-0 sudo[84395]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:26 compute-0 sudo[84519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphavsicqahaystzznffwjubdfjpniyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530985.6665442-296-117229047791352/AnsiballZ_copy.py'
Feb 19 19:56:26 compute-0 sudo[84519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:26 compute-0 python3.9[84522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530985.6665442-296-117229047791352/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=724dd103dd667a5950f17d3814ac1149c7363169 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:26 compute-0 sudo[84519]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:26 compute-0 sudo[84672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdyojccrwnzbenllyxkdhzioqtmhsrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530986.6842-296-143281913466775/AnsiballZ_stat.py'
Feb 19 19:56:26 compute-0 sudo[84672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:27 compute-0 python3.9[84675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:27 compute-0 sudo[84672]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:27 compute-0 sudo[84796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwvcxfmmyqktbpqosmaaxeqaknpoouxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530986.6842-296-143281913466775/AnsiballZ_copy.py'
Feb 19 19:56:27 compute-0 sudo[84796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:27 compute-0 python3.9[84799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530986.6842-296-143281913466775/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a8b1846af05a67627f2a820b7b12ad13c817196b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:27 compute-0 sudo[84796]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:28 compute-0 sudo[84949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jllfuhcknotgqdwzvommeoeoxdorbaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530988.3646631-356-4695392394270/AnsiballZ_file.py'
Feb 19 19:56:28 compute-0 sudo[84949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:28 compute-0 python3.9[84952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:28 compute-0 sudo[84949]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:29 compute-0 sudo[85102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epiasrwahvphnbvstioxorowyqgivgyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530988.9723356-364-6177147938022/AnsiballZ_stat.py'
Feb 19 19:56:29 compute-0 sudo[85102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:29 compute-0 python3.9[85105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:29 compute-0 sudo[85102]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:29 compute-0 sudo[85226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqtfamygssxcxqclkvtkdfdhfakzdln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530988.9723356-364-6177147938022/AnsiballZ_copy.py'
Feb 19 19:56:29 compute-0 sudo[85226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:29 compute-0 python3.9[85229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530988.9723356-364-6177147938022/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:29 compute-0 sudo[85226]: pam_unix(sudo:session): session closed for user root
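
The cacerts entries fan a single CA bundle out to per-service directories, nova here and repo-setup and libvirt below; the identical checksum on every copy shows it is one file. A sketch assuming a plain loop over the service names:

    - name: Create per-service CA bundle directories
      ansible.builtin.file:
        path: "/var/lib/openstack/cacerts/{{ item }}"
        state: directory
        owner: root
        group: root
        mode: '0755'
        setype: container_file_t
      loop: [nova, repo-setup, libvirt]

    - name: Install the shared CA bundle
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: '0644'
      loop: [nova, repo-setup, libvirt]

Unlike the 0600 keys above, the bundle is installed world-readable (0644), since it holds only public CA certificates.
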
Feb 19 19:56:30 compute-0 sudo[85379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlmpxmhmxupoxvyhxrvdrfyoybmrkprl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530990.1053166-380-22546825715765/AnsiballZ_file.py'
Feb 19 19:56:30 compute-0 sudo[85379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:30 compute-0 python3.9[85382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:30 compute-0 sudo[85379]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:30 compute-0 sudo[85532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuabvglfexkyikrnsxwlejkwwfilfasz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530990.6898592-388-40210883526629/AnsiballZ_stat.py'
Feb 19 19:56:30 compute-0 sudo[85532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:31 compute-0 python3.9[85535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:31 compute-0 sudo[85532]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:31 compute-0 sudo[85656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efvpfifaadtzwgjsnahkosqoaajsndix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530990.6898592-388-40210883526629/AnsiballZ_copy.py'
Feb 19 19:56:31 compute-0 sudo[85656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:31 compute-0 python3.9[85659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530990.6898592-388-40210883526629/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:31 compute-0 sudo[85656]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:31 compute-0 sudo[85809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqhpedjlewymcigbuxksfesqdkttayk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530991.772334-404-5980101027319/AnsiballZ_file.py'
Feb 19 19:56:31 compute-0 sudo[85809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:32 compute-0 python3.9[85812]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:32 compute-0 sudo[85809]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:32 compute-0 sudo[85962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjyvwpeytleqhszoumuvkzjjbxpevzvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530992.3498125-412-86427336103950/AnsiballZ_stat.py'
Feb 19 19:56:32 compute-0 sudo[85962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:32 compute-0 python3.9[85965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:32 compute-0 sudo[85962]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:33 compute-0 sudo[86086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xednooeifqhdfooknkqrhatqlkqapspf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530992.3498125-412-86427336103950/AnsiballZ_copy.py'
Feb 19 19:56:33 compute-0 sudo[86086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:33 compute-0 python3.9[86089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530992.3498125-412-86427336103950/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:33 compute-0 sudo[86086]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:33 compute-0 sudo[86239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzgpprinzqvziogbsjyfgoefwoflhwca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530993.511804-428-130819650835717/AnsiballZ_file.py'
Feb 19 19:56:33 compute-0 sudo[86239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:33 compute-0 python3.9[86242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:33 compute-0 sudo[86239]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:34 compute-0 sudo[86392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxgvxweceyzfsplcrlolaglsgknnpmsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530994.0499659-436-237409891086577/AnsiballZ_stat.py'
Feb 19 19:56:34 compute-0 sudo[86392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:34 compute-0 python3.9[86395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:34 compute-0 sudo[86392]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:34 compute-0 sudo[86516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvoalpfdjdcqrkjonwkqoekpjwjnntff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530994.0499659-436-237409891086577/AnsiballZ_copy.py'
Feb 19 19:56:34 compute-0 sudo[86516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:35 compute-0 python3.9[86519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530994.0499659-436-237409891086577/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:35 compute-0 sudo[86516]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:35 compute-0 sudo[86669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczgtmijfcszfmbndrgntrveeohjexth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530995.7024682-452-30137644614793/AnsiballZ_file.py'
Feb 19 19:56:35 compute-0 sudo[86669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:36 compute-0 python3.9[86672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:36 compute-0 sudo[86669]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:36 compute-0 sudo[86822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyukasgybacvnbhhqxjhmjyeorzrirhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530996.2839246-460-148369156680138/AnsiballZ_stat.py'
Feb 19 19:56:36 compute-0 sudo[86822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:36 compute-0 python3.9[86825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:36 compute-0 sudo[86822]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:37 compute-0 sudo[86946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqwixfobvpgzkryhistyxrbqzisqcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530996.2839246-460-148369156680138/AnsiballZ_copy.py'
Feb 19 19:56:37 compute-0 sudo[86946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:37 compute-0 python3.9[86949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530996.2839246-460-148369156680138/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:37 compute-0 sudo[86946]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:37 compute-0 sudo[87099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfcuplmkgeepvqynjeylmzpcqlnwzmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530997.3999531-476-192871859025848/AnsiballZ_file.py'
Feb 19 19:56:37 compute-0 sudo[87099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:37 compute-0 python3.9[87102]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:37 compute-0 sudo[87099]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:38 compute-0 sudo[87252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjbpiotqmdamxwgmoialfkbnddnhkifl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530998.1585567-484-5525345078556/AnsiballZ_stat.py'
Feb 19 19:56:38 compute-0 sudo[87252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:38 compute-0 python3.9[87255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:38 compute-0 sudo[87252]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:38 compute-0 sudo[87376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcmtcstmjroigmktqcwffmuwlghcwypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530998.1585567-484-5525345078556/AnsiballZ_copy.py'
Feb 19 19:56:38 compute-0 sudo[87376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:39 compute-0 python3.9[87379]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530998.1585567-484-5525345078556/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:39 compute-0 sudo[87376]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:39 compute-0 sudo[87529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmbbdkteojbgqiijmupbbepiojqenwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530999.243566-500-224051396421938/AnsiballZ_file.py'
Feb 19 19:56:39 compute-0 sudo[87529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:39 compute-0 python3.9[87532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:39 compute-0 sudo[87529]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:40 compute-0 sudo[87682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhvnllhyacegvorysuaeppyifsrnxtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530999.8212283-508-7640342251499/AnsiballZ_stat.py'
Feb 19 19:56:40 compute-0 sudo[87682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:40 compute-0 python3.9[87685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:40 compute-0 sudo[87682]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:40 compute-0 sudo[87806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhfhmmyomozdqooodxzbcmguwxssyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771530999.8212283-508-7640342251499/AnsiballZ_copy.py'
Feb 19 19:56:40 compute-0 sudo[87806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:40 compute-0 python3.9[87809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771530999.8212283-508-7640342251499/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:40 compute-0 sudo[87806]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:41 compute-0 sudo[87959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzuafukdtjxlrbqciqgzbuganvqifmmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531000.9025164-524-56959513555981/AnsiballZ_file.py'
Feb 19 19:56:41 compute-0 sudo[87959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:41 compute-0 python3.9[87962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:41 compute-0 sudo[87959]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:41 compute-0 sudo[88112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwdyjxbprprpnqourdlamcnkticjzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531001.479299-532-224293112626997/AnsiballZ_stat.py'
Feb 19 19:56:41 compute-0 sudo[88112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:41 compute-0 python3.9[88115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:41 compute-0 sudo[88112]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:42 compute-0 sudo[88236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewioslfmhlsonglulrlfteejjegnjpby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531001.479299-532-224293112626997/AnsiballZ_copy.py'
Feb 19 19:56:42 compute-0 sudo[88236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:42 compute-0 python3.9[88239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531001.479299-532-224293112626997/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=883d2325b92facb0e9361f2e504f1e3488746e77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:42 compute-0 sudo[88236]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:43 compute-0 sshd-session[78958]: Connection closed by 192.168.122.30 port 42508
Feb 19 19:56:43 compute-0 sshd-session[78955]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:56:43 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Feb 19 19:56:43 compute-0 systemd[1]: session-18.scope: Consumed 28.912s CPU time.
Feb 19 19:56:43 compute-0 systemd-logind[810]: Session 18 logged out. Waiting for processes to exit.
Feb 19 19:56:43 compute-0 systemd-logind[810]: Removed session 18.
Feb 19 19:56:47 compute-0 chronyd[66553]: Selected source 23.133.168.244 (pool.ntp.org)
Feb 19 19:56:48 compute-0 sshd-session[88264]: Accepted publickey for zuul from 192.168.122.30 port 53570 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:56:48 compute-0 systemd-logind[810]: New session 19 of user zuul.
Feb 19 19:56:48 compute-0 systemd[1]: Started Session 19 of User zuul.
Feb 19 19:56:48 compute-0 sshd-session[88264]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:56:49 compute-0 python3.9[88417]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:56:50 compute-0 sudo[88571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuwxwdedlsfcaddurfyoyjywapmkushu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531009.8260856-29-32459161667542/AnsiballZ_file.py'
Feb 19 19:56:50 compute-0 sudo[88571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:50 compute-0 python3.9[88574]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:50 compute-0 sudo[88571]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:50 compute-0 sudo[88724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdknwocxhmllxqcrrovjhbqgefvddwud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531010.4775293-29-222476739146764/AnsiballZ_file.py'
Feb 19 19:56:50 compute-0 sudo[88724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:50 compute-0 python3.9[88727]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:56:50 compute-0 sudo[88724]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:51 compute-0 python3.9[88877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:56:52 compute-0 sudo[89027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wttbablynhvdqiaywtjuppzlvopvscgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531011.90909-52-191896430487674/AnsiballZ_seboolean.py'
Feb 19 19:56:52 compute-0 sudo[89027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:52 compute-0 python3.9[89030]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 19 19:56:53 compute-0 sudo[89027]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:53 compute-0 sudo[89184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjlumfrzoeqbpeyhyjkckfidpgtxtzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531013.714366-62-219292222582123/AnsiballZ_setup.py'
Feb 19 19:56:53 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 19 19:56:53 compute-0 sudo[89184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:54 compute-0 python3.9[89187]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:56:54 compute-0 sudo[89184]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:54 compute-0 sudo[89269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxgszoponesysllwpjbapbqtmxfwqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531013.714366-62-219292222582123/AnsiballZ_dnf.py'
Feb 19 19:56:54 compute-0 sudo[89269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:55 compute-0 python3.9[89272]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:56:56 compute-0 sudo[89269]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:57 compute-0 sudo[89423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgumrmbsqbewkivcszcaoialynqzeksb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531016.690036-74-107586744725901/AnsiballZ_systemd.py'
Feb 19 19:56:57 compute-0 sudo[89423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:57 compute-0 python3.9[89426]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 19:56:57 compute-0 sudo[89423]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:58 compute-0 sudo[89579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tykfrzumolnrantwehijathfepiiubrg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531017.7931547-82-229137723479099/AnsiballZ_edpm_nftables_snippet.py'
Feb 19 19:56:58 compute-0 sudo[89579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:58 compute-0 python3[89582]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 19 19:56:58 compute-0 sudo[89579]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:58 compute-0 sudo[89732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btturcirdikjylfpmqkjuvnoseggcvtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531018.624934-91-208280117714623/AnsiballZ_file.py'
Feb 19 19:56:58 compute-0 sudo[89732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:59 compute-0 python3.9[89735]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:56:59 compute-0 sudo[89732]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:59 compute-0 sudo[89885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpgrtauhmpawyctqdmbtrfrzjbxhyeib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531019.165886-99-48580628822975/AnsiballZ_stat.py'
Feb 19 19:56:59 compute-0 sudo[89885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:56:59 compute-0 python3.9[89888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:56:59 compute-0 sudo[89885]: pam_unix(sudo:session): session closed for user root
Feb 19 19:56:59 compute-0 sudo[89964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usarpggfhcahmfvlvxuuxwcyqljcwfuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531019.165886-99-48580628822975/AnsiballZ_file.py'
Feb 19 19:56:59 compute-0 sudo[89964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:00 compute-0 python3.9[89967]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:00 compute-0 sudo[89964]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:00 compute-0 sudo[90117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kusltqiryqsswxnafvfaftddwkzrphlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531020.3549354-111-228953009104526/AnsiballZ_stat.py'
Feb 19 19:57:00 compute-0 sudo[90117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:00 compute-0 python3.9[90120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:00 compute-0 sudo[90117]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:01 compute-0 sudo[90196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnhxmgogborrjvtgnrhklbgwdksnmxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531020.3549354-111-228953009104526/AnsiballZ_file.py'
Feb 19 19:57:01 compute-0 sudo[90196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:01 compute-0 python3.9[90199]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2xuspdt6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:01 compute-0 sudo[90196]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:01 compute-0 sudo[90349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwqxkjmyvqsceaxrjegkdkqysscyigc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531021.3903453-123-131851081451043/AnsiballZ_stat.py'
Feb 19 19:57:01 compute-0 sudo[90349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:01 compute-0 python3.9[90352]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:01 compute-0 sudo[90349]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:02 compute-0 sudo[90428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgnhyxebsekpbwxvpygrbuijejwphkil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531021.3903453-123-131851081451043/AnsiballZ_file.py'
Feb 19 19:57:02 compute-0 sudo[90428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:02 compute-0 python3.9[90431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:02 compute-0 sudo[90428]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:02 compute-0 sudo[90581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjfpmhdbtjqzhwauuhvgeglkrjkgujv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531022.443023-136-51723660139063/AnsiballZ_command.py'
Feb 19 19:57:02 compute-0 sudo[90581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:03 compute-0 python3.9[90584]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:03 compute-0 sudo[90581]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:03 compute-0 sudo[90735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klkqdcftagpyzdqlpcomymhzcqtzemwa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531023.2490304-144-247700712515373/AnsiballZ_edpm_nftables_from_files.py'
Feb 19 19:57:03 compute-0 sudo[90735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:03 compute-0 python3[90738]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 19 19:57:03 compute-0 sudo[90735]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:04 compute-0 sudo[90888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbijdhkjbsylvbeikpcocrqmoeppnhbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531024.0337644-152-257214922173775/AnsiballZ_stat.py'
Feb 19 19:57:04 compute-0 sudo[90888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:04 compute-0 python3.9[90891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:04 compute-0 sudo[90888]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:05 compute-0 sudo[91014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlutbbtpkihbyxtaeoxnrzzkkmvvpfrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531024.0337644-152-257214922173775/AnsiballZ_copy.py'
Feb 19 19:57:05 compute-0 sudo[91014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:05 compute-0 python3.9[91017]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531024.0337644-152-257214922173775/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:05 compute-0 sudo[91014]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:05 compute-0 sudo[91167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liaoemytwajayjtwlkimgyjobirvjvxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531025.5134428-167-274873400411094/AnsiballZ_stat.py'
Feb 19 19:57:05 compute-0 sudo[91167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:05 compute-0 python3.9[91170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:05 compute-0 sudo[91167]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:06 compute-0 sudo[91295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iirosrcmtvrolwvcsvncdxrqylkbcrmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531025.5134428-167-274873400411094/AnsiballZ_copy.py'
Feb 19 19:57:06 compute-0 sudo[91295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:06 compute-0 python3.9[91298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531025.5134428-167-274873400411094/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:06 compute-0 sudo[91295]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:06 compute-0 sudo[91448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izazvgrcffshwgbfvpdtcqzpemsaadev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531026.5822096-182-182590531659235/AnsiballZ_stat.py'
Feb 19 19:57:06 compute-0 sudo[91448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:07 compute-0 python3.9[91451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:07 compute-0 sudo[91448]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:07 compute-0 sshd-session[91171]: Invalid user claude from 103.154.77.48 port 41988
Feb 19 19:57:07 compute-0 sudo[91574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayuokekapzzoorrlebluxaidhmkdnrll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531026.5822096-182-182590531659235/AnsiballZ_copy.py'
Feb 19 19:57:07 compute-0 sudo[91574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:07 compute-0 python3.9[91577]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531026.5822096-182-182590531659235/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:07 compute-0 sshd-session[91171]: Received disconnect from 103.154.77.48 port 41988:11: Bye Bye [preauth]
Feb 19 19:57:07 compute-0 sshd-session[91171]: Disconnected from invalid user claude 103.154.77.48 port 41988 [preauth]
Feb 19 19:57:07 compute-0 sudo[91574]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:07 compute-0 sudo[91727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizdffjhatnyelpfubbsxcizjacijzho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531027.6898837-197-281405940578470/AnsiballZ_stat.py'
Feb 19 19:57:07 compute-0 sudo[91727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:08 compute-0 python3.9[91730]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:08 compute-0 sudo[91727]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:08 compute-0 sudo[91853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvjergfgfvqzijjxfnmysjrodioalwce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531027.6898837-197-281405940578470/AnsiballZ_copy.py'
Feb 19 19:57:08 compute-0 sudo[91853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:08 compute-0 python3.9[91856]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531027.6898837-197-281405940578470/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:08 compute-0 sudo[91853]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:09 compute-0 sudo[92006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imdrchhntdyhdgyqhhopetkaskhrvnbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531028.7470622-212-271858922180760/AnsiballZ_stat.py'
Feb 19 19:57:09 compute-0 sudo[92006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:09 compute-0 python3.9[92009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:09 compute-0 sudo[92006]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:09 compute-0 sudo[92132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwqzfttfcvjtkngufannuswgtumlkzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531028.7470622-212-271858922180760/AnsiballZ_copy.py'
Feb 19 19:57:09 compute-0 sudo[92132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:09 compute-0 python3.9[92135]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531028.7470622-212-271858922180760/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:10 compute-0 sudo[92132]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:10 compute-0 sudo[92285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqnvjmncqhzbyzmsyxejzfwbnzpabei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531030.1555257-227-197347412835991/AnsiballZ_file.py'
Feb 19 19:57:10 compute-0 sudo[92285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:10 compute-0 python3.9[92288]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:10 compute-0 sudo[92285]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:10 compute-0 sudo[92438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfsrimnbzasvimgrpwfinixtruhsnltk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531030.715336-235-149118186149872/AnsiballZ_command.py'
Feb 19 19:57:10 compute-0 sudo[92438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:11 compute-0 python3.9[92441]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:11 compute-0 sudo[92438]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:11 compute-0 sudo[92594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkkaamcpjbpudsafvrhvefvshlxjkqec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531031.344204-243-198977977435439/AnsiballZ_blockinfile.py'
Feb 19 19:57:11 compute-0 sudo[92594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:11 compute-0 python3.9[92597]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:11 compute-0 sudo[92594]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:12 compute-0 sudo[92747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhyqlxbmqqquitvlhxzgfmjrpgzsbcwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531032.1861193-252-271840615386388/AnsiballZ_command.py'
Feb 19 19:57:12 compute-0 sudo[92747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:12 compute-0 python3.9[92750]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:12 compute-0 sudo[92747]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:12 compute-0 sudo[92901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfxpmrmlvikyoxjqrdpycvvgipqfervi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531032.7756293-260-107175495055615/AnsiballZ_stat.py'
Feb 19 19:57:13 compute-0 sudo[92901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:13 compute-0 python3.9[92904]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:57:13 compute-0 sudo[92901]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:13 compute-0 sudo[93056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrsjaqmgktewsndziwoakouqpilbdti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531033.354092-268-25524594502855/AnsiballZ_command.py'
Feb 19 19:57:13 compute-0 sudo[93056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:13 compute-0 python3.9[93059]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:13 compute-0 sudo[93056]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:14 compute-0 sudo[93212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qorfazjmcqrlyorikiseuqzwzwbzxuta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531034.0168297-276-207331100023750/AnsiballZ_file.py'
Feb 19 19:57:14 compute-0 sudo[93212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:14 compute-0 python3.9[93215]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:14 compute-0 sudo[93212]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:15 compute-0 python3.9[93365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:57:16 compute-0 sudo[93516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txivoagnjmghfmmpftmsqutjevogjqyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531036.2998798-317-237236645658833/AnsiballZ_command.py'
Feb 19 19:57:16 compute-0 sudo[93516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:16 compute-0 python3.9[93519]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:2f:db:26:37" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:16 compute-0 ovs-vsctl[93520]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:2f:db:26:37 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 19 19:57:16 compute-0 sudo[93516]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:17 compute-0 sudo[93670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnrrhhtfpvkeilidlnvygidyyyvhhbaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531036.9311273-326-168280051915920/AnsiballZ_command.py'
Feb 19 19:57:17 compute-0 sudo[93670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:17 compute-0 python3.9[93673]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:17 compute-0 sudo[93670]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:17 compute-0 sudo[93826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmmjrjycnksraffuxqujexcwwogwgci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531037.498021-334-203747374575521/AnsiballZ_command.py'
Feb 19 19:57:17 compute-0 sudo[93826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:17 compute-0 python3.9[93829]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:17 compute-0 ovs-vsctl[93830]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 19 19:57:17 compute-0 sudo[93826]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:18 compute-0 python3.9[93980]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:57:19 compute-0 sudo[94132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smurmnyjmfcdbhrsfahsfimovtjkuibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531038.8827455-351-58822171409486/AnsiballZ_file.py'
Feb 19 19:57:19 compute-0 sudo[94132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:19 compute-0 python3.9[94135]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:19 compute-0 sudo[94132]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:19 compute-0 sudo[94285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfrfnziuhzuzjegxtiahqbmcamlbvvuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531039.4409316-359-68702724286362/AnsiballZ_stat.py'
Feb 19 19:57:19 compute-0 sudo[94285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:19 compute-0 python3.9[94288]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:19 compute-0 sudo[94285]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:19 compute-0 sudo[94364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpgmdapcglhfnymkxaowhtfxehsflurj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531039.4409316-359-68702724286362/AnsiballZ_file.py'
Feb 19 19:57:19 compute-0 sudo[94364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:20 compute-0 python3.9[94367]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:20 compute-0 sudo[94364]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:20 compute-0 sudo[94519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgqbykzzipbndvrztjgglefcksnaykj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531040.3286688-359-207699974680916/AnsiballZ_stat.py'
Feb 19 19:57:20 compute-0 sudo[94519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:20 compute-0 python3.9[94522]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:20 compute-0 sudo[94519]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:21 compute-0 sudo[94598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwqhhuixsjfffdefonjapeiorveksult ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531040.3286688-359-207699974680916/AnsiballZ_file.py'
Feb 19 19:57:21 compute-0 sudo[94598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:21 compute-0 python3.9[94601]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:21 compute-0 sudo[94598]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:21 compute-0 sshd-session[94368]: Invalid user claude from 103.213.238.91 port 40362
Feb 19 19:57:21 compute-0 sudo[94751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upuydfhlhlfpaqspnosrctmylzqticfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531041.3814878-382-73351895676205/AnsiballZ_file.py'
Feb 19 19:57:21 compute-0 sudo[94751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:21 compute-0 python3.9[94754]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:21 compute-0 sshd-session[94368]: Received disconnect from 103.213.238.91 port 40362:11: Bye Bye [preauth]
Feb 19 19:57:21 compute-0 sshd-session[94368]: Disconnected from invalid user claude 103.213.238.91 port 40362 [preauth]
Feb 19 19:57:21 compute-0 sudo[94751]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:22 compute-0 sudo[94904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkjqbndcatrzbuzdulsvserncxhvocob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531041.94902-390-71670345416801/AnsiballZ_stat.py'
Feb 19 19:57:22 compute-0 sudo[94904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:22 compute-0 python3.9[94907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:22 compute-0 sudo[94904]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:22 compute-0 sudo[94983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwdxzbykxcddqyptlaeihagdyykmsqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531041.94902-390-71670345416801/AnsiballZ_file.py'
Feb 19 19:57:22 compute-0 sudo[94983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:22 compute-0 python3.9[94986]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:22 compute-0 sudo[94983]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:23 compute-0 sudo[95136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrgsmafgtwccwvfgfpkepajcnkdwfjpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531042.8903942-402-85362123544614/AnsiballZ_stat.py'
Feb 19 19:57:23 compute-0 sudo[95136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:23 compute-0 python3.9[95139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:23 compute-0 sudo[95136]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:23 compute-0 sudo[95215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrmxqcgycsiapngrrqcmbkwrrkjtstky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531042.8903942-402-85362123544614/AnsiballZ_file.py'
Feb 19 19:57:23 compute-0 sudo[95215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:23 compute-0 python3.9[95218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
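The .preset file installed above gives systemd the default enablement state for the shutdown unit. Its contents are not logged, but a preset of this kind is typically a single directive; the following is only an illustrative sketch:

    # Hypothetical contents of 91-edpm-container-shutdown.preset (not shown in the log):
    enable edpm-container-shutdown.service

    # Apply presets to the unit after installing the file:
    systemctl preset edpm-container-shutdown.service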
Feb 19 19:57:23 compute-0 sudo[95215]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:24 compute-0 sudo[95368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlraqihkquooldgkjnkyrrlveznfhbbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531043.875567-414-249081160773713/AnsiballZ_systemd.py'
Feb 19 19:57:24 compute-0 sudo[95368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:24 compute-0 python3.9[95371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:57:24 compute-0 systemd[1]: Reloading.
Feb 19 19:57:24 compute-0 systemd-rc-local-generator[95400]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:57:24 compute-0 systemd-sysv-generator[95403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:57:24 compute-0 sudo[95368]: pam_unix(sudo:session): session closed for user root
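The ansible.builtin.systemd call above (daemon_reload=True enabled=True state=started) corresponds roughly to this command sequence:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service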
Feb 19 19:57:24 compute-0 sudo[95565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthxuruquyqpkdrkxzpfkdneetczzhaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531044.7469552-422-153591174816093/AnsiballZ_stat.py'
Feb 19 19:57:24 compute-0 sudo[95565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:25 compute-0 python3.9[95568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:25 compute-0 sudo[95565]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:25 compute-0 sudo[95644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujzhjfdxctfcgwigunlozengoexekxia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531044.7469552-422-153591174816093/AnsiballZ_file.py'
Feb 19 19:57:25 compute-0 sudo[95644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:25 compute-0 python3.9[95647]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:25 compute-0 sudo[95644]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:26 compute-0 sudo[95797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leoifsuquzuvbgeqhluixcwlnkreodpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531045.7945783-434-229724432864355/AnsiballZ_stat.py'
Feb 19 19:57:26 compute-0 sudo[95797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:26 compute-0 python3.9[95800]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:26 compute-0 sudo[95797]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:26 compute-0 sudo[95876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqthzowkiomjgcnwnqomltjggfblykr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531045.7945783-434-229724432864355/AnsiballZ_file.py'
Feb 19 19:57:26 compute-0 sudo[95876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:26 compute-0 python3.9[95879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:26 compute-0 sudo[95876]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:27 compute-0 sudo[96029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ickwtoszrkoztjnyfybqpjseltvulscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531046.853647-446-117016668890697/AnsiballZ_systemd.py'
Feb 19 19:57:27 compute-0 sudo[96029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:27 compute-0 python3.9[96032]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:57:27 compute-0 systemd[1]: Reloading.
Feb 19 19:57:27 compute-0 systemd-rc-local-generator[96060]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:57:27 compute-0 systemd-sysv-generator[96063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:57:27 compute-0 systemd[1]: Starting Create netns directory...
Feb 19 19:57:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 19 19:57:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 19 19:57:27 compute-0 systemd[1]: Finished Create netns directory.
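The netns-placeholder oneshot prepares the shared network-namespace directory; note that systemd reports the transient run-netns-placeholder.mount deactivating once the unit finishes. The unit file itself is not logged, so what persists depends on its definition; to inspect the result by hand (a sketch):

    # Check for a shared netns mount point and any named namespaces:
    findmnt /run/netns
    ip netns list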
Feb 19 19:57:27 compute-0 sudo[96029]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:28 compute-0 sudo[96230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbovofmfyatkhwegzvbojwpepnihjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531047.8222353-456-196017837317810/AnsiballZ_file.py'
Feb 19 19:57:28 compute-0 sudo[96230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:28 compute-0 python3.9[96233]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:28 compute-0 sudo[96230]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:28 compute-0 sudo[96383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhldfrutkznyjrtaekacfbxsdfiaram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531048.350744-464-102788766442501/AnsiballZ_stat.py'
Feb 19 19:57:28 compute-0 sudo[96383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:28 compute-0 python3.9[96386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:28 compute-0 sudo[96383]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:29 compute-0 sudo[96507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfgmvxcknarvncvyotwzgbukbpqicjob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531048.350744-464-102788766442501/AnsiballZ_copy.py'
Feb 19 19:57:29 compute-0 sudo[96507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:29 compute-0 python3.9[96510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531048.350744-464-102788766442501/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:29 compute-0 sudo[96507]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:29 compute-0 sudo[96660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuyjoxiqlabvroqynwmmugshstwelktx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531049.5730305-481-46512210277821/AnsiballZ_file.py'
Feb 19 19:57:29 compute-0 sudo[96660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:30 compute-0 python3.9[96663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:30 compute-0 sudo[96660]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:30 compute-0 sudo[96813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcehppulrodxgaabtdashrlhurieivsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531050.176765-489-7104467938968/AnsiballZ_file.py'
Feb 19 19:57:30 compute-0 sudo[96813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:30 compute-0 python3.9[96816]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:30 compute-0 sudo[96813]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:31 compute-0 sudo[96966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyimfkcmprpykybyugrhbpapayjyedcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531050.7774007-497-161483891511925/AnsiballZ_stat.py'
Feb 19 19:57:31 compute-0 sudo[96966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:31 compute-0 python3.9[96969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:31 compute-0 sudo[96966]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:31 compute-0 sudo[97090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxukdplgduyhrhfbbsxvznbactbduqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531050.7774007-497-161483891511925/AnsiballZ_copy.py'
Feb 19 19:57:31 compute-0 sudo[97090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:31 compute-0 python3.9[97093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531050.7774007-497-161483891511925/.source.json _original_basename=.87vff5wc follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
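The JSON written above is the kolla bootstrap config that kolla_set_configs validates when the container starts (see the INFO lines at 19:57:40). Its real contents are masked (content=NOT_LOGGING_PARAMETER); the "command" value below is grounded in the /run_command echoed later in this log, while the "permissions" block is a hypothetical example of the format:

    {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt",
        "permissions": [
            {"path": "/run/ovn", "owner": "root:root", "recurse": true}
        ]
    }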
Feb 19 19:57:31 compute-0 sudo[97090]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:32 compute-0 python3.9[97243]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:33 compute-0 sudo[97664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxnymavowxrgfvqvtkvaxxlvovjleucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531053.6015933-537-92389498526124/AnsiballZ_container_config_data.py'
Feb 19 19:57:33 compute-0 sudo[97664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:34 compute-0 python3.9[97667]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 19 19:57:34 compute-0 sudo[97664]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:34 compute-0 sudo[97817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfscofyncxesqelbdlefmcmbuszawomh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531054.4177287-548-15710945910183/AnsiballZ_container_config_hash.py'
Feb 19 19:57:34 compute-0 sudo[97817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:34 compute-0 python3.9[97820]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 19:57:35 compute-0 sudo[97817]: pam_unix(sudo:session): session closed for user root
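container_config_hash checksums the config volumes under the config_vol_prefix, and the dash-joined result surfaces below as the EDPM_CONFIG_HASH environment value, so a config change forces a container recreate. The module's exact algorithm is not in the log; the rough idea, as an assumption only:

    # Rough idea only; not the module's actual algorithm or paths:
    find /var/lib/openstack/config -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum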
Feb 19 19:57:35 compute-0 sudo[97970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moyltfkktanekdpxxhmhfpwcitaodtqu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531055.2432804-558-122612711061520/AnsiballZ_edpm_container_manage.py'
Feb 19 19:57:35 compute-0 sudo[97970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:35 compute-0 python3[97973]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 19:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:57:36 compute-0 podman[98010]: 2026-02-19 19:57:36.060206353 +0000 UTC m=+0.036652451 container create 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 19:57:36 compute-0 podman[98010]: 2026-02-19 19:57:36.040113668 +0000 UTC m=+0.016559796 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 19 19:57:36 compute-0 python3[97973]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
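The single-line PODMAN-CONTAINER-DEBUG entry above is the full create invocation; wrapped and trimmed to its load-bearing flags it reads (EDPM_CONFIG_HASH, the config_data label, and the TLS volumes omitted here for brevity):

    podman create --name ovn_controller \
        --conmon-pidfile /run/ovn_controller.pid \
        --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        --healthcheck-command /openstack/healthcheck \
        --log-driver journald --network host --privileged=True --user root \
        --volume /lib/modules:/lib/modules:ro \
        --volume /run:/run \
        --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z \
        --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
        --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
        quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified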
Feb 19 19:57:36 compute-0 sudo[97970]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:36 compute-0 sudo[98198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsmuxylnlejrevbfwpdlkfkuqydyphrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531056.2901034-566-31195053039984/AnsiballZ_stat.py'
Feb 19 19:57:36 compute-0 sudo[98198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:36 compute-0 python3.9[98201]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:57:36 compute-0 sudo[98198]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 19 19:57:37 compute-0 sudo[98353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jndkgfpviahclkxqlopdxkxtvcoythce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531056.9514039-575-191635641602967/AnsiballZ_file.py'
Feb 19 19:57:37 compute-0 sudo[98353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:37 compute-0 python3.9[98356]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:37 compute-0 sudo[98353]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:37 compute-0 sudo[98430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvjkobtbsqwcdqmukabbgocbsqfqiimp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531056.9514039-575-191635641602967/AnsiballZ_stat.py'
Feb 19 19:57:37 compute-0 sudo[98430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:37 compute-0 python3.9[98433]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:57:37 compute-0 sudo[98430]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:38 compute-0 sudo[98582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxowapojcdydwkezesrxjypzlsstffnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531057.7944512-575-38332417975929/AnsiballZ_copy.py'
Feb 19 19:57:38 compute-0 sudo[98582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:38 compute-0 python3.9[98585]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531057.7944512-575-38332417975929/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
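The unit file copied above wires the podman container into systemd; its content is masked (content=NOT_LOGGING_PARAMETER). Given the "Starting ovn_controller container..." message and the edpm-start-podman-container output later in the log, a plausible hypothetical shape is:

    # Hypothetical sketch of edpm_ovn_controller.service (actual content is not logged):
    [Unit]
    Description=ovn_controller container

    [Service]
    Restart=always
    ExecStart=/var/local/libexec/edpm-start-podman-container ovn_controller

    [Install]
    WantedBy=multi-user.target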
Feb 19 19:57:38 compute-0 sudo[98582]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:38 compute-0 sudo[98659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uveqprafypobqahbyghrcohtfjjnqait ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531057.7944512-575-38332417975929/AnsiballZ_systemd.py'
Feb 19 19:57:38 compute-0 sudo[98659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:38 compute-0 python3.9[98662]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 19:57:38 compute-0 systemd[1]: Reloading.
Feb 19 19:57:38 compute-0 systemd-rc-local-generator[98681]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:57:38 compute-0 systemd-sysv-generator[98689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:57:39 compute-0 sudo[98659]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:39 compute-0 sudo[98777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrpffjmjlqitddkdcskpmxkqghqwqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531057.7944512-575-38332417975929/AnsiballZ_systemd.py'
Feb 19 19:57:39 compute-0 sudo[98777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:39 compute-0 python3.9[98780]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:57:39 compute-0 systemd[1]: Reloading.
Feb 19 19:57:39 compute-0 systemd-rc-local-generator[98809]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:57:39 compute-0 systemd-sysv-generator[98815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:57:39 compute-0 systemd[1]: Starting ovn_controller container...
Feb 19 19:57:39 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 19 19:57:39 compute-0 systemd[1]: Started libcrun container.
Feb 19 19:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bfeb940bc50ad4844b5f196af76e10f5a9d59310b74af8265258c713ef2305/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 19 19:57:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.
Feb 19 19:57:39 compute-0 podman[98828]: 2026-02-19 19:57:39.882406138 +0000 UTC m=+0.138743346 container init 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 19:57:39 compute-0 ovn_controller[98843]: + sudo -E kolla_set_configs
Feb 19 19:57:39 compute-0 podman[98828]: 2026-02-19 19:57:39.910554894 +0000 UTC m=+0.166892132 container start 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Feb 19 19:57:39 compute-0 edpm-start-podman-container[98828]: ovn_controller
Feb 19 19:57:39 compute-0 systemd[1]: Created slice User Slice of UID 0.
Feb 19 19:57:39 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 19 19:57:39 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 19 19:57:39 compute-0 systemd[1]: Starting User Manager for UID 0...
Feb 19 19:57:39 compute-0 edpm-start-podman-container[98827]: Creating additional drop-in dependency for "ovn_controller" (626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0)
Feb 19 19:57:39 compute-0 systemd[98880]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Feb 19 19:57:39 compute-0 podman[98849]: 2026-02-19 19:57:39.980330544 +0000 UTC m=+0.060776631 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 19:57:39 compute-0 systemd[1]: 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0-7c3e6483c93a68f3.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 19:57:39 compute-0 systemd[1]: 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0-7c3e6483c93a68f3.service: Failed with result 'exit-code'.
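The transient 626cf2...-7c3e6483c93a68f3.service failing with status=1 above is podman's first healthcheck timer firing while ovn-controller is still connecting (the health_status event at 19:57:39 shows health_status=starting, health_failing_streak=1). Once the container settles, the same check can be re-run by hand:

    # Re-run the container healthcheck manually (sketch); exit 0 means healthy:
    podman healthcheck run ovn_controller && echo healthy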
Feb 19 19:57:39 compute-0 systemd[1]: Reloading.
Feb 19 19:57:40 compute-0 systemd-rc-local-generator[98925]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:57:40 compute-0 systemd-sysv-generator[98931]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:57:40 compute-0 systemd[98880]: Queued start job for default target Main User Target.
Feb 19 19:57:40 compute-0 systemd[98880]: Created slice User Application Slice.
Feb 19 19:57:40 compute-0 systemd[98880]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 19 19:57:40 compute-0 systemd[98880]: Started Daily Cleanup of User's Temporary Directories.
Feb 19 19:57:40 compute-0 systemd[98880]: Reached target Paths.
Feb 19 19:57:40 compute-0 systemd[98880]: Reached target Timers.
Feb 19 19:57:40 compute-0 systemd[98880]: Starting D-Bus User Message Bus Socket...
Feb 19 19:57:40 compute-0 systemd[98880]: Starting Create User's Volatile Files and Directories...
Feb 19 19:57:40 compute-0 systemd[98880]: Finished Create User's Volatile Files and Directories.
Feb 19 19:57:40 compute-0 systemd[98880]: Listening on D-Bus User Message Bus Socket.
Feb 19 19:57:40 compute-0 systemd[98880]: Reached target Sockets.
Feb 19 19:57:40 compute-0 systemd[98880]: Reached target Basic System.
Feb 19 19:57:40 compute-0 systemd[98880]: Reached target Main User Target.
Feb 19 19:57:40 compute-0 systemd[98880]: Startup finished in 130ms.
Feb 19 19:57:40 compute-0 systemd[1]: Started User Manager for UID 0.
Feb 19 19:57:40 compute-0 systemd[1]: Started ovn_controller container.
Feb 19 19:57:40 compute-0 systemd[1]: Started Session c1 of User root.
Feb 19 19:57:40 compute-0 sudo[98777]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:40 compute-0 ovn_controller[98843]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 19:57:40 compute-0 ovn_controller[98843]: INFO:__main__:Validating config file
Feb 19 19:57:40 compute-0 ovn_controller[98843]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 19:57:40 compute-0 ovn_controller[98843]: INFO:__main__:Writing out command to execute
Feb 19 19:57:40 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: ++ cat /run_command
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + ARGS=
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + sudo kolla_copy_cacerts
Feb 19 19:57:40 compute-0 systemd[1]: Started Session c2 of User root.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + [[ ! -n '' ]]
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + . kolla_extend_start
Feb 19 19:57:40 compute-0 ovn_controller[98843]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + umask 0022
Feb 19 19:57:40 compute-0 ovn_controller[98843]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
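The exec line above maps the certificates mounted at create time to ovn-controller's SSL options: -p is the private key, -c the client certificate, and -C the CA bundle used to verify the southbound server at ssl:ovsdbserver-sb.openstack.svc:6642. To confirm the southbound session from inside the container (a sketch; ovn-appctl talks to the running daemon over its control socket):

    podman exec ovn_controller ovn-appctl -t ovn-controller connection-status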
Feb 19 19:57:40 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3343] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3348] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <warn>  [1771531060.3349] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3356] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3361] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3364] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 19 19:57:40 compute-0 kernel: br-int: entered promiscuous mode
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 19 19:57:40 compute-0 systemd-udevd[98988]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 19 19:57:40 compute-0 ovn_controller[98843]: 2026-02-19T19:57:40Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3726] manager: (ovn-4bdf93-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 19 19:57:40 compute-0 systemd-udevd[98987]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 19:57:40 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3878] device (genev_sys_6081): carrier: link connected
Feb 19 19:57:40 compute-0 NetworkManager[57033]: <info>  [1771531060.3883] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Feb 19 19:57:41 compute-0 python3.9[99117]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 19:57:41 compute-0 sudo[99267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugqjhkmyrwzwbnwmfxlltzmhnbfzhub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531061.5827906-620-168987324253635/AnsiballZ_stat.py'
Feb 19 19:57:41 compute-0 sudo[99267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:42 compute-0 python3.9[99270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:42 compute-0 sudo[99267]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:42 compute-0 sudo[99391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgtwltindnhgehwvzjgkrqsctwamackt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531061.5827906-620-168987324253635/AnsiballZ_copy.py'
Feb 19 19:57:42 compute-0 sudo[99391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:42 compute-0 python3.9[99394]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531061.5827906-620-168987324253635/.source.yaml _original_basename=.jp5g06td follow=False checksum=50ccbc3a4c79331627ebac2a4f3b6bf3ba73d273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:57:42 compute-0 sudo[99391]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:42 compute-0 sudo[99544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqvbbyxqinzqfcmpcqhldlyxkrykjwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531062.6438065-635-21765167461066/AnsiballZ_command.py'
Feb 19 19:57:42 compute-0 sudo[99544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:43 compute-0 python3.9[99547]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:43 compute-0 ovs-vsctl[99548]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 19 19:57:43 compute-0 sudo[99544]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:43 compute-0 sudo[99698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exhdzikkhkohdodlnqinrgygwdixzoog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531063.2974417-643-63002995367678/AnsiballZ_command.py'
Feb 19 19:57:43 compute-0 sudo[99698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:43 compute-0 python3.9[99701]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:43 compute-0 ovs-vsctl[99703]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 19 19:57:43 compute-0 sudo[99698]: pam_unix(sudo:session): session closed for user root
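The db_ctl_base ERR above is expected when the key was never set; the shell pipeline simply receives an empty string. Adding --if-exists makes the probe quiet:

    # Quiet probe for an optional key (the command that errored above):
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options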
Feb 19 19:57:44 compute-0 sudo[99854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twriqjmyeqxchndvaypnvphrgycywhra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531064.0274193-657-56301729551718/AnsiballZ_command.py'
Feb 19 19:57:44 compute-0 sudo[99854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:44 compute-0 python3.9[99857]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:57:44 compute-0 ovs-vsctl[99858]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 19 19:57:44 compute-0 sudo[99854]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:44 compute-0 sshd-session[88267]: Connection closed by 192.168.122.30 port 53570
Feb 19 19:57:44 compute-0 sshd-session[88264]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:57:44 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Feb 19 19:57:44 compute-0 systemd[1]: session-19.scope: Consumed 39.365s CPU time.
Feb 19 19:57:44 compute-0 systemd-logind[810]: Session 19 logged out. Waiting for processes to exit.
Feb 19 19:57:44 compute-0 systemd-logind[810]: Removed session 19.
Feb 19 19:57:50 compute-0 sshd-session[99883]: Accepted publickey for zuul from 192.168.122.30 port 40476 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:57:50 compute-0 systemd-logind[810]: New session 21 of user zuul.
Feb 19 19:57:50 compute-0 systemd[1]: Started Session 21 of User zuul.
Feb 19 19:57:50 compute-0 sshd-session[99883]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:57:50 compute-0 systemd[1]: Stopping User Manager for UID 0...
Feb 19 19:57:50 compute-0 systemd[98880]: Activating special unit Exit the Session...
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped target Main User Target.
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped target Basic System.
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped target Paths.
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped target Sockets.
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped target Timers.
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 19 19:57:50 compute-0 systemd[98880]: Closed D-Bus User Message Bus Socket.
Feb 19 19:57:50 compute-0 systemd[98880]: Stopped Create User's Volatile Files and Directories.
Feb 19 19:57:50 compute-0 systemd[98880]: Removed slice User Application Slice.
Feb 19 19:57:50 compute-0 systemd[98880]: Reached target Shutdown.
Feb 19 19:57:50 compute-0 systemd[98880]: Finished Exit the Session.
Feb 19 19:57:50 compute-0 systemd[98880]: Reached target Exit the Session.
Feb 19 19:57:50 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Feb 19 19:57:50 compute-0 systemd[1]: Stopped User Manager for UID 0.
Feb 19 19:57:50 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 19 19:57:50 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 19 19:57:50 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 19 19:57:50 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 19 19:57:50 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Feb 19 19:57:51 compute-0 python3.9[100038]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:57:51 compute-0 sudo[100192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhqvcgizjkmphlakccbqkzjmkdpqlgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531071.4890811-29-244052914012397/AnsiballZ_file.py'
Feb 19 19:57:51 compute-0 sudo[100192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:52 compute-0 python3.9[100195]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:52 compute-0 sudo[100192]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:52 compute-0 sudo[100345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgrvvgyrbzluwgkinprrddgjkgzggmwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531072.2209914-29-15428427538918/AnsiballZ_file.py'
Feb 19 19:57:52 compute-0 sudo[100345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:52 compute-0 python3.9[100348]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:52 compute-0 sudo[100345]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:52 compute-0 sudo[100498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmkwyurukxfhzirqvfbfzhdmcsjvuncp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531072.7268531-29-154615512234753/AnsiballZ_file.py'
Feb 19 19:57:52 compute-0 sudo[100498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:53 compute-0 python3.9[100501]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:53 compute-0 sudo[100498]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:53 compute-0 sudo[100651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckplaxswejhuaqalpuzimpnkvkgoyhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531073.2098641-29-268503313789152/AnsiballZ_file.py'
Feb 19 19:57:53 compute-0 sudo[100651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:53 compute-0 python3.9[100654]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:53 compute-0 sudo[100651]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:54 compute-0 sudo[100804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqeweslhalhzvwqztiyuzapmsstdkiso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531073.8689234-29-151303855683145/AnsiballZ_file.py'
Feb 19 19:57:54 compute-0 sudo[100804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:54 compute-0 python3.9[100807]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:54 compute-0 sudo[100804]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:54 compute-0 python3.9[100957]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:57:55 compute-0 sudo[101107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xchielgxrsnwmyugmelskcneockbsecm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531075.0458694-73-146300083469042/AnsiballZ_seboolean.py'
Feb 19 19:57:55 compute-0 sudo[101107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:55 compute-0 python3.9[101110]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 19 19:57:56 compute-0 sudo[101107]: pam_unix(sudo:session): session closed for user root
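Note: the seboolean task above persists virt_sandbox_use_netlink so containerized agents may open netlink sockets under SELinux. A one-line CLI-level equivalent (a sketch; the ansible.posix module goes through the libselinux Python bindings rather than shelling out):

import subprocess

# Equivalent of ansible.posix.seboolean with state=True persistent=True:
# -P also writes the value into the policy so it survives reboots.
subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)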
Feb 19 19:57:56 compute-0 python3.9[101260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:57 compute-0 python3.9[101381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531076.2355947-81-274167009318866/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:57 compute-0 python3.9[101531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:57:58 compute-0 python3.9[101652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531077.5157132-96-123127321302626/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:57:58 compute-0 sudo[101803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphzpdchrnwqnoiqefypkkcyxcwwsleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531078.576302-113-231624334295133/AnsiballZ_setup.py'
Feb 19 19:57:58 compute-0 sudo[101803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:59 compute-0 python3.9[101806]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:57:59 compute-0 sudo[101803]: pam_unix(sudo:session): session closed for user root
Feb 19 19:57:59 compute-0 sudo[101888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwdkamdbtkwvaeathzjxuhsrapuxcxdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531078.576302-113-231624334295133/AnsiballZ_dnf.py'
Feb 19 19:57:59 compute-0 sudo[101888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:57:59 compute-0 python3.9[101891]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:58:00 compute-0 sudo[101888]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:01 compute-0 sudo[102042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lylludwugayqwemoyhgltjcqeofbeuab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531081.083791-125-53859818823419/AnsiballZ_systemd.py'
Feb 19 19:58:01 compute-0 sudo[102042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:01 compute-0 python3.9[102045]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 19:58:02 compute-0 sudo[102042]: pam_unix(sudo:session): session closed for user root
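Note: the two tasks above follow the usual install-then-enable-and-start pattern for openvswitch. An illustrative shell-level equivalent (the real modules drive the dnf Python API and systemd over D-Bus, not these CLIs):

import subprocess

subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)
# enabled=True state=started maps to 'systemctl enable --now'
subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)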
Feb 19 19:58:02 compute-0 python3.9[102198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:03 compute-0 python3.9[102319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531082.1477308-133-131715527185467/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:03 compute-0 python3.9[102469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:04 compute-0 python3.9[102590]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531083.2941833-133-180637865022411/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:05 compute-0 python3.9[102740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:05 compute-0 python3.9[102863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531084.819118-177-14254387973550/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:06 compute-0 python3.9[103013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:06 compute-0 sshd-session[102788]: Received disconnect from 158.174.210.161 port 25141:11: Bye Bye [preauth]
Feb 19 19:58:06 compute-0 sshd-session[102788]: Disconnected from authenticating user root 158.174.210.161 port 25141 [preauth]
Feb 19 19:58:06 compute-0 python3.9[103134]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531085.8136733-177-280727827517716/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:07 compute-0 python3.9[103284]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:58:07 compute-0 sudo[103436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwvqtbnbgegvnppgpuicwdrupkytejfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531087.413205-215-68101841750871/AnsiballZ_file.py'
Feb 19 19:58:07 compute-0 sudo[103436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:07 compute-0 python3.9[103439]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:07 compute-0 sudo[103436]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:08 compute-0 sudo[103589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstrqeesnktnicevyraudgyxqhiznlme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531087.9254315-223-146438191741882/AnsiballZ_stat.py'
Feb 19 19:58:08 compute-0 sudo[103589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:08 compute-0 python3.9[103592]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:08 compute-0 sudo[103589]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:08 compute-0 sudo[103668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uagmxqkikdgrxqhkareczgdnbuguviuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531087.9254315-223-146438191741882/AnsiballZ_file.py'
Feb 19 19:58:08 compute-0 sudo[103668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:08 compute-0 python3.9[103671]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:08 compute-0 sudo[103668]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:09 compute-0 sudo[103821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxiaxhgdhhkrwyytrvyuxmfyvbbkmnol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531088.9220839-223-171778422600353/AnsiballZ_stat.py'
Feb 19 19:58:09 compute-0 sudo[103821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:09 compute-0 python3.9[103824]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:09 compute-0 sudo[103821]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:09 compute-0 sudo[103900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtzmwxmsmwuhdcqpelpyqrjntrwdcoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531088.9220839-223-171778422600353/AnsiballZ_file.py'
Feb 19 19:58:09 compute-0 sudo[103900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:09 compute-0 python3.9[103903]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:09 compute-0 sudo[103900]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:10 compute-0 sudo[104066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gclqameixdvdskmejdoffxcaldmrmfei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531089.9355114-246-72333736863180/AnsiballZ_file.py'
Feb 19 19:58:10 compute-0 sudo[104066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:10 compute-0 ovn_controller[98843]: 2026-02-19T19:58:10Z|00025|memory|INFO|17408 kB peak resident set size after 29.9 seconds
Feb 19 19:58:10 compute-0 ovn_controller[98843]: 2026-02-19T19:58:10Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 19 19:58:10 compute-0 podman[104027]: 2026-02-19 19:58:10.209404794 +0000 UTC m=+0.086566232 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 19:58:10 compute-0 python3.9[104073]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:10 compute-0 sudo[104066]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:10 compute-0 sudo[104232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxewnuntqwnpomyrzddcfozinnopgxxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531090.527109-254-271315532946023/AnsiballZ_stat.py'
Feb 19 19:58:10 compute-0 sudo[104232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:10 compute-0 python3.9[104235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:10 compute-0 sudo[104232]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:11 compute-0 sudo[104311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxzpxiwbhhrecxxykaqoyzqezvsxhnoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531090.527109-254-271315532946023/AnsiballZ_file.py'
Feb 19 19:58:11 compute-0 sudo[104311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:11 compute-0 python3.9[104314]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:11 compute-0 sudo[104311]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:11 compute-0 sudo[104464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cakvwukiykgkrydjvqhnwdqlbtnjjaot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531091.5393155-266-240774278757532/AnsiballZ_stat.py'
Feb 19 19:58:11 compute-0 sudo[104464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:11 compute-0 python3.9[104467]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:11 compute-0 sudo[104464]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:12 compute-0 sudo[104543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpunhklhsvprupnxmibqetzmesftall ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531091.5393155-266-240774278757532/AnsiballZ_file.py'
Feb 19 19:58:12 compute-0 sudo[104543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:12 compute-0 python3.9[104546]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:12 compute-0 sudo[104543]: pam_unix(sudo:session): session closed for user root
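Note: the preset file managed above is what lets systemd enable edpm-container-shutdown on freshly provisioned nodes. Its contents are never logged here (only permissions are re-asserted), so the following is a hypothetical reconstruction based on the conventional one-directive payload of such presets:

from pathlib import Path

# Assumed contents: the log shows the file being managed, not what is
# inside it; a single 'enable' directive is a guess at the payload.
Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset").write_text(
    "enable edpm-container-shutdown.service\n"
)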
Feb 19 19:58:12 compute-0 sudo[104696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmzsqamydopmdzdpftrpsxolrmfpobet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531092.5379398-278-7643883350539/AnsiballZ_systemd.py'
Feb 19 19:58:12 compute-0 sudo[104696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:13 compute-0 python3.9[104699]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:13 compute-0 systemd[1]: Reloading.
Feb 19 19:58:13 compute-0 systemd-sysv-generator[104728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:13 compute-0 systemd-rc-local-generator[104720]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:13 compute-0 sudo[104696]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:13 compute-0 sudo[104893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewwthdvpqwsjtwqoijhslmfzlhrdbqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531093.4282918-286-94884083537290/AnsiballZ_stat.py'
Feb 19 19:58:13 compute-0 sudo[104893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:13 compute-0 python3.9[104896]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:13 compute-0 sudo[104893]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:14 compute-0 sudo[104972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtnhtoptfabqhroayksgjtyabbovazpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531093.4282918-286-94884083537290/AnsiballZ_file.py'
Feb 19 19:58:14 compute-0 sudo[104972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:14 compute-0 python3.9[104975]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:14 compute-0 sudo[104972]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:14 compute-0 sudo[105125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgojqiokzmsklolnpctvqakjmbwiteiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531094.4683268-298-7454954689836/AnsiballZ_stat.py'
Feb 19 19:58:14 compute-0 sudo[105125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:14 compute-0 python3.9[105128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:14 compute-0 sudo[105125]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:15 compute-0 sudo[105204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtlgeldbeyxzgivlnuhgzzhjbywuxzdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531094.4683268-298-7454954689836/AnsiballZ_file.py'
Feb 19 19:58:15 compute-0 sudo[105204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:15 compute-0 python3.9[105207]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:15 compute-0 sudo[105204]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:15 compute-0 sudo[105357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brswfwdifkbhtirqhzdxeylycgebwjkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531095.4482605-310-92621836688165/AnsiballZ_systemd.py'
Feb 19 19:58:15 compute-0 sudo[105357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:15 compute-0 python3.9[105360]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:15 compute-0 systemd[1]: Reloading.
Feb 19 19:58:16 compute-0 systemd-sysv-generator[105392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:16 compute-0 systemd-rc-local-generator[105389]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:16 compute-0 systemd[1]: Starting Create netns directory...
Feb 19 19:58:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 19 19:58:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 19 19:58:16 compute-0 systemd[1]: Finished Create netns directory.
Feb 19 19:58:16 compute-0 sudo[105357]: pam_unix(sudo:session): session closed for user root
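Note: judging from the run-netns-placeholder.mount event it leaves behind, the netns-placeholder oneshot creates and deletes a throwaway network namespace so that /run/netns is initialised as a mount point before containers bind-mount it shared (see the /run/netns:/run/netns:shared volume in the podman create below). A sketch of that inferred behaviour, not taken from the unit file itself:

import subprocess

# 'ip netns add' mounts /run/netns/placeholder (the mount unit seen
# above); the delete cleans it up again, leaving /run/netns set up
# on the host for later namespace sharing.
subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)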
Feb 19 19:58:16 compute-0 sudo[105558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvnehkegdooqlymvvxibjtvijbrghixo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531096.5140731-320-209173204590748/AnsiballZ_file.py'
Feb 19 19:58:16 compute-0 sudo[105558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:16 compute-0 python3.9[105561]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:16 compute-0 sudo[105558]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:17 compute-0 sudo[105711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didfbgczmsygpceccaupsaihedbdpkmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531097.141433-328-167313526518443/AnsiballZ_stat.py'
Feb 19 19:58:17 compute-0 sudo[105711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:17 compute-0 python3.9[105714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:17 compute-0 sudo[105711]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:17 compute-0 sudo[105835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noqeqsvegkasugihdytroikmwcgizwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531097.141433-328-167313526518443/AnsiballZ_copy.py'
Feb 19 19:58:17 compute-0 sudo[105835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:18 compute-0 python3.9[105838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531097.141433-328-167313526518443/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:18 compute-0 sudo[105835]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:18 compute-0 sudo[105988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iczjnqiqbcmrqugnfbxiqwlvhfwxiwpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531098.4280965-345-186163077698434/AnsiballZ_file.py'
Feb 19 19:58:18 compute-0 sudo[105988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:18 compute-0 python3.9[105991]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:18 compute-0 sudo[105988]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:19 compute-0 sudo[106141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwipmunajxozxqwvnkukajhzrbpkvjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531098.9751456-353-50070752181746/AnsiballZ_file.py'
Feb 19 19:58:19 compute-0 sudo[106141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:19 compute-0 python3.9[106144]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 19:58:19 compute-0 sudo[106141]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:19 compute-0 sudo[106294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogitqrzxcvdwfzaikdprduxbnjhxetfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531099.5695357-361-95696814141905/AnsiballZ_stat.py'
Feb 19 19:58:19 compute-0 sudo[106294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:19 compute-0 python3.9[106297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:19 compute-0 sudo[106294]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:20 compute-0 sudo[106418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upuozgpgavorcjpzgojhlkxxvxlqfjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531099.5695357-361-95696814141905/AnsiballZ_copy.py'
Feb 19 19:58:20 compute-0 sudo[106418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:20 compute-0 python3.9[106421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531099.5695357-361-95696814141905/.source.json _original_basename=.fy4kd1a8 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:20 compute-0 sudo[106418]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:20 compute-0 python3.9[106571]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:22 compute-0 sudo[106992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlysasrebgahqkaosttisrwgqjhmmxls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531102.2268722-401-96906013630080/AnsiballZ_container_config_data.py'
Feb 19 19:58:22 compute-0 sudo[106992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:22 compute-0 python3.9[106995]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 19 19:58:22 compute-0 sudo[106992]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:23 compute-0 sudo[107145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enxdvmppohtusgchdqntitlwdsjesyyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531103.0458014-412-27582174239734/AnsiballZ_container_config_hash.py'
Feb 19 19:58:23 compute-0 sudo[107145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:23 compute-0 python3.9[107148]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 19:58:23 compute-0 sudo[107145]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:24 compute-0 sudo[107298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltrtxueqblszxgzeesalvqcndjwicsy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531103.879674-422-20926009477865/AnsiballZ_edpm_container_manage.py'
Feb 19 19:58:24 compute-0 sudo[107298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:24 compute-0 python3[107301]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 19:58:24 compute-0 podman[107335]: 2026-02-19 19:58:24.743540969 +0000 UTC m=+0.060586581 container create 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 19 19:58:24 compute-0 podman[107335]: 2026-02-19 19:58:24.714695272 +0000 UTC m=+0.031740964 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 19:58:24 compute-0 python3[107301]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 19:58:24 compute-0 sudo[107298]: pam_unix(sudo:session): session closed for user root
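Note: the EDPM_CONFIG_HASH label in the podman create above is a '-'-joined series of 64-character hex digests, one per managed config input; recomputing such a value and comparing it against the running container's label is presumably how edpm_container_manage decides when a recreate is needed. A sketch under that assumption (the exact input set is not visible in this log):

import hashlib
from pathlib import Path

def config_hash(paths):
    # One sha256 per config source, joined with '-', matching the shape
    # of the EDPM_CONFIG_HASH value logged above.
    return "-".join(
        hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths
    )

# Example with one file this run actually wrote; the real hash likely
# covers several inputs whose identities are an assumption here.
print(config_hash(["/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf"]))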
Feb 19 19:58:25 compute-0 sudo[107521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpdmyrrwqjwmqurgqsetjnewujjnqtwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531104.9947305-430-244124429658356/AnsiballZ_stat.py'
Feb 19 19:58:25 compute-0 sudo[107521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:25 compute-0 python3.9[107524]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:58:25 compute-0 sudo[107521]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:25 compute-0 sudo[107676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqidefeusgmukbphpdxnicllevdckgqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531105.640579-439-251224746177014/AnsiballZ_file.py'
Feb 19 19:58:25 compute-0 sudo[107676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:26 compute-0 python3.9[107679]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:26 compute-0 sudo[107676]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:26 compute-0 sudo[107753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apwrlmydibwsvqeogcutbvqytkftdnda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531105.640579-439-251224746177014/AnsiballZ_stat.py'
Feb 19 19:58:26 compute-0 sudo[107753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:26 compute-0 python3.9[107756]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 19:58:26 compute-0 sudo[107753]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:26 compute-0 sudo[107905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooeekzebhswpfgenepmbbszyzffcizzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531106.4674428-439-115250243059503/AnsiballZ_copy.py'
Feb 19 19:58:26 compute-0 sudo[107905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:27 compute-0 python3.9[107908]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531106.4674428-439-115250243059503/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:27 compute-0 sudo[107905]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:27 compute-0 sudo[107982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsgnutchqzjbpsqykbvaftxqplibtie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531106.4674428-439-115250243059503/AnsiballZ_systemd.py'
Feb 19 19:58:27 compute-0 sudo[107982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:27 compute-0 python3.9[107985]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 19:58:27 compute-0 systemd[1]: Reloading.
Feb 19 19:58:27 compute-0 systemd-sysv-generator[108017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:27 compute-0 systemd-rc-local-generator[108009]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:27 compute-0 sudo[107982]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:27 compute-0 sudo[108101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbpodrxeowndflyegwghszdijlnfmru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531106.4674428-439-115250243059503/AnsiballZ_systemd.py'
Feb 19 19:58:27 compute-0 sudo[108101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:28 compute-0 python3.9[108104]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:28 compute-0 systemd[1]: Reloading.
Feb 19 19:58:28 compute-0 systemd-rc-local-generator[108128]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:28 compute-0 systemd-sysv-generator[108133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:28 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Feb 19 19:58:28 compute-0 systemd[1]: Started libcrun container.
Feb 19 19:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d817ad9f709865677131a3e7e341f79dc425c9d0978caec58be7bdd67710c5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 19 19:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d817ad9f709865677131a3e7e341f79dc425c9d0978caec58be7bdd67710c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 19:58:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.
Feb 19 19:58:28 compute-0 podman[108153]: 2026-02-19 19:58:28.625752739 +0000 UTC m=+0.180420037 container init 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + sudo -E kolla_set_configs
Feb 19 19:58:28 compute-0 podman[108153]: 2026-02-19 19:58:28.653585702 +0000 UTC m=+0.208252900 container start 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 19 19:58:28 compute-0 edpm-start-podman-container[108153]: ovn_metadata_agent
Feb 19 19:58:28 compute-0 edpm-start-podman-container[108152]: Creating additional drop-in dependency for "ovn_metadata_agent" (59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e)
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Validating config file
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Copying service configuration files
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Writing out command to execute
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: ++ cat /run_command
Feb 19 19:58:28 compute-0 systemd[1]: Reloading.
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + CMD=neutron-ovn-metadata-agent
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + ARGS=
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + sudo kolla_copy_cacerts
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + [[ ! -n '' ]]
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + . kolla_extend_start
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: Running command: 'neutron-ovn-metadata-agent'
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + umask 0022
Feb 19 19:58:28 compute-0 ovn_metadata_agent[108170]: + exec neutron-ovn-metadata-agent
Feb 19 19:58:28 compute-0 podman[108177]: 2026-02-19 19:58:28.725949918 +0000 UTC m=+0.065706514 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 19 19:58:28 compute-0 systemd-rc-local-generator[108246]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:28 compute-0 systemd-sysv-generator[108250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:28 compute-0 systemd[1]: Started ovn_metadata_agent container.
Feb 19 19:58:28 compute-0 sudo[108101]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:29 compute-0 python3.9[108413]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 19:58:30 compute-0 sudo[108563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lspyzbfyikaenymfyknyifbhvjhyzpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531110.0727482-484-13230944136049/AnsiballZ_stat.py'
Feb 19 19:58:30 compute-0 sudo[108563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.366 108175 INFO neutron.common.config [-] Logging enabled!
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.366 108175 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.366 108175 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.366 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.367 108175 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.368 108175 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.369 108175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.370 108175 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.371 108175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.372 108175 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.373 108175 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.374 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.375 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.376 108175 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.377 108175 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.378 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.379 108175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.380 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.381 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.382 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.383 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.384 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.385 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.386 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.387 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.388 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.389 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.390 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.391 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.392 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.393 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.394 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.395 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.396 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.397 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.398 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.399 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.400 108175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.401 108175 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.415 108175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.416 108175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.416 108175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.416 108175 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.417 108175 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.430 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e2fe6bb6-fad0-4563-8388-215a30f03e3f (UUID: e2fe6bb6-fad0-4563-8388-215a30f03e3f) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.456 108175 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.456 108175 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.456 108175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.456 108175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.460 108175 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.465 108175 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.471 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e2fe6bb6-fad0-4563-8388-215a30f03e3f'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], external_ids={}, name=e2fe6bb6-fad0-4563-8388-215a30f03e3f, nb_cfg_timestamp=1771531068368, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.472 108175 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fc014bf2130>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.473 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.473 108175 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.474 108175 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.474 108175 INFO oslo_service.service [-] Starting 1 workers
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.479 108175 DEBUG oslo_service.service [-] Started child 108567 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.482 108567 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-368187'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.482 108175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp5lo0jqs3/privsep.sock']
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.502 108567 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.503 108567 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.503 108567 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.506 108567 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.511 108567 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 19 19:58:30 compute-0 python3.9[108566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 19:58:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.518 108567 INFO eventlet.wsgi.server [-] (108567) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb 19 19:58:30 compute-0 sudo[108563]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:30 compute-0 sudo[108694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wozcmueawgvcrwevkzpafpkrmjnpvzvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531110.0727482-484-13230944136049/AnsiballZ_copy.py'
Feb 19 19:58:30 compute-0 sudo[108694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:30 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 19 19:58:31 compute-0 python3.9[108697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531110.0727482-484-13230944136049/.source.yaml _original_basename=.v5k06_kf follow=False checksum=6e2be7b47a6e7fb0f55392fd46da26536fcd59b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:31 compute-0 sudo[108694]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:31.087 108175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:31.088 108175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5lo0jqs3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.972 108698 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.975 108698 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.977 108698 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:30.977 108698 INFO oslo.privsep.daemon [-] privsep daemon running as pid 108698
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:31.090 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[b96de89b-8be0-496a-aa59-b7c5d6035599]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 19:58:31 compute-0 sshd-session[99886]: Connection closed by 192.168.122.30 port 40476
Feb 19 19:58:31 compute-0 sshd-session[99883]: pam_unix(sshd:session): session closed for user zuul
Feb 19 19:58:31 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Feb 19 19:58:31 compute-0 systemd[1]: session-21.scope: Consumed 29.786s CPU time.
Feb 19 19:58:31 compute-0 systemd-logind[810]: Session 21 logged out. Waiting for processes to exit.
Feb 19 19:58:31 compute-0 systemd-logind[810]: Removed session 21.
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:31.546 108698 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:31.546 108698 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 19:58:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:31.546 108698 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.028 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[28473cbe-d42f-445c-9632-f0edb05dd3dd]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.030 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, column=external_ids, values=({'neutron:ovn-metadata-id': '7433f001-2f8b-5882-9cbc-228a50029aa7'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.039 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.044 108175 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.044 108175 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.044 108175 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.045 108175 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.045 108175 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.045 108175 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.045 108175 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.045 108175 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.045 108175 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.046 108175 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.046 108175 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.046 108175 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.046 108175 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.046 108175 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.046 108175 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.047 108175 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.048 108175 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.048 108175 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.048 108175 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.048 108175 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.048 108175 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.048 108175 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.049 108175 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.049 108175 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.049 108175 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.049 108175 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.049 108175 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.049 108175 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.050 108175 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.050 108175 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.050 108175 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.050 108175 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.050 108175 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.050 108175 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.051 108175 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.052 108175 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.053 108175 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.054 108175 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.055 108175 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.056 108175 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.056 108175 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.056 108175 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.056 108175 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.056 108175 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.056 108175 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.057 108175 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.058 108175 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.059 108175 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.060 108175 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.061 108175 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.061 108175 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.061 108175 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.061 108175 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.061 108175 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.061 108175 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.062 108175 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.062 108175 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.062 108175 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.062 108175 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.062 108175 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.062 108175 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.063 108175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.064 108175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.065 108175 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.065 108175 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.065 108175 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.065 108175 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.065 108175 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.065 108175 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.066 108175 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.067 108175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.068 108175 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.069 108175 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.070 108175 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.071 108175 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.072 108175 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.073 108175 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.073 108175 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.073 108175 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.073 108175 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.073 108175 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.073 108175 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.074 108175 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.075 108175 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.076 108175 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.077 108175 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.078 108175 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.079 108175 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.080 108175 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.080 108175 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.080 108175 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.080 108175 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.080 108175 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.080 108175 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.081 108175 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.082 108175 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.083 108175 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.084 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.085 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.085 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.085 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.085 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.085 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.085 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.086 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.087 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.088 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.088 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.088 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.088 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.088 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.088 108175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.089 108175 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.089 108175 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.089 108175 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.089 108175 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 19:58:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:58:32.089 108175 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 19 19:58:36 compute-0 sshd-session[108727]: Accepted publickey for zuul from 192.168.122.30 port 54588 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 19:58:36 compute-0 systemd-logind[810]: New session 22 of user zuul.
Feb 19 19:58:36 compute-0 systemd[1]: Started Session 22 of User zuul.
Feb 19 19:58:36 compute-0 sshd-session[108727]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 19:58:37 compute-0 python3.9[108880]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 19:58:38 compute-0 sudo[109034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihkfvtoeiqtiikocsrqezgmvexjtowl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531118.0038223-29-23840367813066/AnsiballZ_command.py'
Feb 19 19:58:38 compute-0 sudo[109034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:38 compute-0 python3.9[109037]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:38 compute-0 sudo[109034]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:39 compute-0 sudo[109200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bleggdgofnimzormdzadalxpzenlekhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531118.9111876-40-127710339400047/AnsiballZ_systemd_service.py'
Feb 19 19:58:39 compute-0 sudo[109200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:39 compute-0 python3.9[109203]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 19:58:39 compute-0 systemd[1]: Reloading.
Feb 19 19:58:39 compute-0 systemd-sysv-generator[109229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:39 compute-0 systemd-rc-local-generator[109222]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:39 compute-0 sudo[109200]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:40 compute-0 podman[109322]: 2026-02-19 19:58:40.389067542 +0000 UTC m=+0.078028437 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 19:58:40 compute-0 python3.9[109423]: ansible-ansible.builtin.service_facts Invoked
Feb 19 19:58:40 compute-0 network[109440]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 19 19:58:40 compute-0 network[109441]: 'network-scripts' will be removed from distribution in near future.
Feb 19 19:58:40 compute-0 network[109442]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 19:58:43 compute-0 sudo[109702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vajwpccbptssaiuwommnfrorjoatbgwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531122.8614314-59-161759441434801/AnsiballZ_systemd_service.py'
Feb 19 19:58:43 compute-0 sudo[109702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:43 compute-0 python3.9[109705]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:43 compute-0 sudo[109702]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:43 compute-0 sudo[109856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyouecgkkevdswieccqjffkslvmaiial ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531123.4675324-59-252676212795884/AnsiballZ_systemd_service.py'
Feb 19 19:58:43 compute-0 sudo[109856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:43 compute-0 python3.9[109859]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:44 compute-0 sudo[109856]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:44 compute-0 sudo[110010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-invyiganrlvurxybaofmecovteiaegbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531124.121651-59-225144139147181/AnsiballZ_systemd_service.py'
Feb 19 19:58:44 compute-0 sudo[110010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:44 compute-0 python3.9[110013]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:44 compute-0 sudo[110010]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:45 compute-0 sudo[110164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbvktupfshfcymsbrvvmojaeoccrmxqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531124.808879-59-53633035758218/AnsiballZ_systemd_service.py'
Feb 19 19:58:45 compute-0 sudo[110164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:45 compute-0 python3.9[110167]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:45 compute-0 sudo[110164]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:45 compute-0 sudo[110318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwknpngvfizzzuotxywekbqvsydykzyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531125.4444456-59-116843142524573/AnsiballZ_systemd_service.py'
Feb 19 19:58:45 compute-0 sudo[110318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:45 compute-0 python3.9[110321]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:45 compute-0 sudo[110318]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:46 compute-0 sudo[110472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztsmwycrkqyscjnauwwgnujermyylzdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531126.1088653-59-54379742857037/AnsiballZ_systemd_service.py'
Feb 19 19:58:46 compute-0 sudo[110472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:46 compute-0 python3.9[110475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:46 compute-0 sudo[110472]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:47 compute-0 sudo[110626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjynzzqgkcwvplbhnzynsfqepkkxhci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531126.8088028-59-67420300153309/AnsiballZ_systemd_service.py'
Feb 19 19:58:47 compute-0 sudo[110626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:47 compute-0 python3.9[110629]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 19:58:47 compute-0 sudo[110626]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:48 compute-0 sudo[110780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkvgyoqcjstnbcliccvqwoslwjrjqvwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531127.6514575-111-91650050107302/AnsiballZ_file.py'
Feb 19 19:58:48 compute-0 sudo[110780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:48 compute-0 python3.9[110783]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:48 compute-0 sudo[110780]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:48 compute-0 sudo[110933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khwozpbznoewijdhvjeeajwgelholjwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531128.3238165-111-169448760697240/AnsiballZ_file.py'
Feb 19 19:58:48 compute-0 sudo[110933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:48 compute-0 python3.9[110936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:48 compute-0 sudo[110933]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:49 compute-0 sudo[111086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ystnicpojxairciojxcathuvslgemfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531128.8503296-111-168269687624893/AnsiballZ_file.py'
Feb 19 19:58:49 compute-0 sudo[111086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:49 compute-0 python3.9[111089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:49 compute-0 sudo[111086]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:49 compute-0 sudo[111239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlywjjnlkltzphsldymxmtgryrmynyoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531129.3253694-111-21435779748855/AnsiballZ_file.py'
Feb 19 19:58:49 compute-0 sudo[111239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:49 compute-0 python3.9[111242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:49 compute-0 sudo[111239]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:50 compute-0 sudo[111392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iexrirvzzyxykfpxjkrlsxpxjnnuswof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531129.8211987-111-73146234052469/AnsiballZ_file.py'
Feb 19 19:58:50 compute-0 sudo[111392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:50 compute-0 python3.9[111395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:50 compute-0 sudo[111392]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:50 compute-0 sudo[111545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdyeiwrzsgneflmxmbyntxapmjakztrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531130.4152668-111-230485444875802/AnsiballZ_file.py'
Feb 19 19:58:50 compute-0 sudo[111545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:50 compute-0 python3.9[111548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:50 compute-0 sudo[111545]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:51 compute-0 sudo[111698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqjglvqnxqrlcnhotbbjpvmfupykfsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531130.967766-111-156214505525576/AnsiballZ_file.py'
Feb 19 19:58:51 compute-0 sudo[111698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:51 compute-0 python3.9[111701]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:51 compute-0 sudo[111698]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:51 compute-0 sudo[111851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlwwgzbmwkuqilqlagwlqwxggyybzlfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531131.533217-161-54030674112166/AnsiballZ_file.py'
Feb 19 19:58:51 compute-0 sudo[111851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:51 compute-0 python3.9[111854]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:51 compute-0 sudo[111851]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:52 compute-0 sshd-session[111855]: Connection closed by 103.213.244.180 port 57508 [preauth]
Feb 19 19:58:52 compute-0 sudo[112006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxfetxowoxnhhqpkzmicuvatfeayyboi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531132.075709-161-178613134759906/AnsiballZ_file.py'
Feb 19 19:58:52 compute-0 sudo[112006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:52 compute-0 python3.9[112009]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:52 compute-0 sudo[112006]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:52 compute-0 sudo[112159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyiryoljqjrvftnunpxfafidvaiqjyke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531132.6375804-161-146218894402175/AnsiballZ_file.py'
Feb 19 19:58:52 compute-0 sudo[112159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:53 compute-0 python3.9[112162]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:53 compute-0 sudo[112159]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:53 compute-0 sudo[112312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oabvhendmojvnkbsrdukmehqkzdreckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531133.1585546-161-49685904980590/AnsiballZ_file.py'
Feb 19 19:58:53 compute-0 sudo[112312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:53 compute-0 python3.9[112315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:53 compute-0 sudo[112312]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:53 compute-0 sudo[112465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvsppdqtpecyqhrvwdyobtkvklcekaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531133.7014503-161-92573844380745/AnsiballZ_file.py'
Feb 19 19:58:53 compute-0 sudo[112465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:54 compute-0 python3.9[112468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:54 compute-0 sudo[112465]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:54 compute-0 sudo[112618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvpmhuezynmsrukwjxgthowuinczjggs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531134.2860532-161-142875751330/AnsiballZ_file.py'
Feb 19 19:58:54 compute-0 sudo[112618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:54 compute-0 python3.9[112621]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:54 compute-0 sudo[112618]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:55 compute-0 sudo[112771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfofqnjyulxmqmpajogtlcidyoqtcer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531134.873837-161-279229647032523/AnsiballZ_file.py'
Feb 19 19:58:55 compute-0 sudo[112771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:55 compute-0 python3.9[112774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 19:58:55 compute-0 sudo[112771]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:55 compute-0 sudo[112924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilgglecwtqjsrngyodztwlzochihhir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531135.5227225-212-201941113634501/AnsiballZ_command.py'
Feb 19 19:58:55 compute-0 sudo[112924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:55 compute-0 python3.9[112927]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:55 compute-0 sudo[112924]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:56 compute-0 python3.9[113079]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 19:58:56 compute-0 sudo[113229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgulrhruxneivfadwebwsodrgrhboum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531136.783528-230-85370290370042/AnsiballZ_systemd_service.py'
Feb 19 19:58:56 compute-0 sudo[113229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:57 compute-0 python3.9[113232]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 19:58:57 compute-0 systemd[1]: Reloading.
Feb 19 19:58:57 compute-0 systemd-rc-local-generator[113259]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 19:58:57 compute-0 systemd-sysv-generator[113262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 19:58:57 compute-0 sudo[113229]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:57 compute-0 sudo[113424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrshregswmgdkzixsxspvudadklwnxpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531137.6226962-238-164845986601021/AnsiballZ_command.py'
Feb 19 19:58:57 compute-0 sudo[113424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:57 compute-0 python3.9[113427]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:57 compute-0 sudo[113424]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:58 compute-0 sudo[113578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjzenrsfonblnhkwzyhyqfccnwjszkms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531138.081736-238-44183357882652/AnsiballZ_command.py'
Feb 19 19:58:58 compute-0 sudo[113578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:58 compute-0 python3.9[113581]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:58 compute-0 sudo[113578]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:58 compute-0 sudo[113732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjovdpyhxzucjkvtsxbcpoqleuiuwci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531138.5457785-238-215667519641189/AnsiballZ_command.py'
Feb 19 19:58:58 compute-0 sudo[113732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:58 compute-0 python3.9[113735]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:58 compute-0 sudo[113732]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:59 compute-0 podman[113737]: 2026-02-19 19:58:59.037235086 +0000 UTC m=+0.057696500 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 19:58:59 compute-0 sudo[113903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgqliwpnphxrkgaqsumwfploxjgkalfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531139.1042922-238-200137508896038/AnsiballZ_command.py'
Feb 19 19:58:59 compute-0 sudo[113903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:59 compute-0 python3.9[113906]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:59 compute-0 sudo[113903]: pam_unix(sudo:session): session closed for user root
Feb 19 19:58:59 compute-0 sudo[114057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aizytpxdwdixtqamjnqwywmvwoachirq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531139.5998514-238-220836921942595/AnsiballZ_command.py'
Feb 19 19:58:59 compute-0 sudo[114057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:58:59 compute-0 python3.9[114060]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:58:59 compute-0 sudo[114057]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:00 compute-0 sudo[114211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kszhjkexytjcjkmpmjiufcdfvqusygtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531140.076763-238-157726555039941/AnsiballZ_command.py'
Feb 19 19:59:00 compute-0 sudo[114211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:00 compute-0 python3.9[114214]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:59:00 compute-0 sudo[114211]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:00 compute-0 sudo[114365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjacdwhtbeenkjsnyepknvyvrhqverj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531140.5516992-238-249916028002793/AnsiballZ_command.py'
Feb 19 19:59:00 compute-0 sudo[114365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:01 compute-0 python3.9[114368]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 19:59:01 compute-0 sudo[114365]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:01 compute-0 sudo[114519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cklmlqleadfuogzcymcdhprnfcdwdxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531141.4132087-292-135789099972111/AnsiballZ_getent.py'
Feb 19 19:59:01 compute-0 sudo[114519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:01 compute-0 python3.9[114522]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 19 19:59:01 compute-0 sudo[114519]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:02 compute-0 sudo[114673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbztlpiuobzrrbumlqgmcbddlvgstgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531142.1592557-300-194547530435700/AnsiballZ_group.py'
Feb 19 19:59:02 compute-0 sudo[114673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:02 compute-0 python3.9[114676]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 19 19:59:02 compute-0 groupadd[114677]: group added to /etc/group: name=libvirt, GID=42473
Feb 19 19:59:02 compute-0 groupadd[114677]: group added to /etc/gshadow: name=libvirt
Feb 19 19:59:02 compute-0 groupadd[114677]: new group: name=libvirt, GID=42473
Feb 19 19:59:02 compute-0 sudo[114673]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:03 compute-0 sudo[114832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjpfkiwabsboiqrmfmrgkyzzcjlfvryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531142.9625905-308-111436302985664/AnsiballZ_user.py'
Feb 19 19:59:03 compute-0 sudo[114832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:03 compute-0 python3.9[114835]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 19 19:59:03 compute-0 useradd[114837]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/1
Feb 19 19:59:03 compute-0 sudo[114832]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:04 compute-0 sudo[114993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egfgyburqiknrllbtnhftlurjhvxagfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531143.9531932-319-94627814505089/AnsiballZ_setup.py'
Feb 19 19:59:04 compute-0 sudo[114993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:04 compute-0 python3.9[114996]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 19:59:04 compute-0 sudo[114993]: pam_unix(sudo:session): session closed for user root
Feb 19 19:59:05 compute-0 sudo[115078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczwzgdybddaojyslbhktbsybpkokrqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531143.9531932-319-94627814505089/AnsiballZ_dnf.py'
Feb 19 19:59:05 compute-0 sudo[115078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 19:59:05 compute-0 python3.9[115081]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 19:59:11 compute-0 podman[115093]: 2026-02-19 19:59:11.412850228 +0000 UTC m=+0.093869990 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 19 19:59:28 compute-0 sshd-session[115299]: Invalid user iksi from 125.31.2.160 port 34952
Feb 19 19:59:28 compute-0 sshd-session[115299]: Received disconnect from 125.31.2.160 port 34952:11: Bye Bye [preauth]
Feb 19 19:59:28 compute-0 sshd-session[115299]: Disconnected from invalid user iksi 125.31.2.160 port 34952 [preauth]
Feb 19 19:59:29 compute-0 podman[115301]: 2026-02-19 19:59:29.365870863 +0000 UTC m=+0.056493083 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 19 19:59:29 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:59:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:59:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:59:30.403 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 19:59:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:59:30.404 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 19:59:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 19:59:30.404 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 19:59:39 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 19:59:39 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 19:59:42 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 19 19:59:42 compute-0 podman[115337]: 2026-02-19 19:59:42.399960725 +0000 UTC m=+0.077878730 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:00:00 compute-0 podman[124710]: 2026-02-19 20:00:00.375848706 +0000 UTC m=+0.069958504 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:00:13 compute-0 podman[132277]: 2026-02-19 20:00:13.448145473 +0000 UTC m=+0.129370402 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Feb 19 20:00:14 compute-0 sshd-session[132275]: Received disconnect from 103.154.77.48 port 40892:11: Bye Bye [preauth]
Feb 19 20:00:14 compute-0 sshd-session[132275]: Disconnected from authenticating user root 103.154.77.48 port 40892 [preauth]
Feb 19 20:00:20 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 19 20:00:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 19 20:00:21 compute-0 groupadd[132315]: group added to /etc/group: name=dnsmasq, GID=993
Feb 19 20:00:21 compute-0 groupadd[132315]: group added to /etc/gshadow: name=dnsmasq
Feb 19 20:00:21 compute-0 groupadd[132315]: new group: name=dnsmasq, GID=993
Feb 19 20:00:21 compute-0 useradd[132322]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Feb 19 20:00:21 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 20:00:21 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 19 20:00:21 compute-0 dbus-broker-launch[778]: Noticed file-system modification, trigger reload.
Feb 19 20:00:22 compute-0 groupadd[132335]: group added to /etc/group: name=clevis, GID=992
Feb 19 20:00:22 compute-0 groupadd[132335]: group added to /etc/gshadow: name=clevis
Feb 19 20:00:22 compute-0 groupadd[132335]: new group: name=clevis, GID=992
Feb 19 20:00:22 compute-0 useradd[132342]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Feb 19 20:00:22 compute-0 usermod[132352]: add 'clevis' to group 'tss'
Feb 19 20:00:22 compute-0 usermod[132352]: add 'clevis' to shadow group 'tss'
Feb 19 20:00:24 compute-0 polkitd[44382]: Reloading rules
Feb 19 20:00:24 compute-0 polkitd[44382]: Collecting garbage unconditionally...
Feb 19 20:00:24 compute-0 polkitd[44382]: Loading rules from directory /etc/polkit-1/rules.d
Feb 19 20:00:24 compute-0 polkitd[44382]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 19 20:00:24 compute-0 polkitd[44382]: Finished loading, compiling and executing 3 rules
Feb 19 20:00:24 compute-0 polkitd[44382]: Reloading rules
Feb 19 20:00:24 compute-0 polkitd[44382]: Collecting garbage unconditionally...
Feb 19 20:00:24 compute-0 polkitd[44382]: Loading rules from directory /etc/polkit-1/rules.d
Feb 19 20:00:24 compute-0 polkitd[44382]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 19 20:00:24 compute-0 polkitd[44382]: Finished loading, compiling and executing 3 rules
Feb 19 20:00:25 compute-0 groupadd[132542]: group added to /etc/group: name=ceph, GID=167
Feb 19 20:00:25 compute-0 groupadd[132542]: group added to /etc/gshadow: name=ceph
Feb 19 20:00:25 compute-0 groupadd[132542]: new group: name=ceph, GID=167
Feb 19 20:00:25 compute-0 useradd[132548]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Feb 19 20:00:27 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Feb 19 20:00:27 compute-0 sshd[1015]: Received signal 15; terminating.
Feb 19 20:00:27 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Feb 19 20:00:27 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Feb 19 20:00:27 compute-0 systemd[1]: sshd.service: Consumed 2.000s CPU time, read 32.0K from disk, written 80.0K to disk.
Feb 19 20:00:27 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Feb 19 20:00:27 compute-0 systemd[1]: Stopping sshd-keygen.target...
Feb 19 20:00:27 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 19 20:00:27 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 19 20:00:27 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 19 20:00:27 compute-0 systemd[1]: Reached target sshd-keygen.target.
Feb 19 20:00:27 compute-0 systemd[1]: Starting OpenSSH server daemon...
Feb 19 20:00:27 compute-0 sshd[133067]: Server listening on 0.0.0.0 port 22.
Feb 19 20:00:27 compute-0 sshd[133067]: Server listening on :: port 22.
Feb 19 20:00:27 compute-0 systemd[1]: Started OpenSSH server daemon.
Feb 19 20:00:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 20:00:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 20:00:29 compute-0 systemd[1]: Reloading.
Feb 19 20:00:29 compute-0 systemd-rc-local-generator[133323]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:29 compute-0 systemd-sysv-generator[133329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 20:00:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:00:30.403 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:00:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:00:30.405 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:00:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:00:30.405 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:00:31 compute-0 podman[136432]: 2026-02-19 20:00:31.380883364 +0000 UTC m=+0.070358995 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:00:31 compute-0 sudo[115078]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:32 compute-0 sudo[139027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjlcziuydxmcnifiutxeuchyajjjfcat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531232.0878592-331-168406445314562/AnsiballZ_systemd.py'
Feb 19 20:00:32 compute-0 sudo[139027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:32 compute-0 python3.9[139048]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 20:00:33 compute-0 systemd[1]: Reloading.
Feb 19 20:00:33 compute-0 systemd-rc-local-generator[139748]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:33 compute-0 systemd-sysv-generator[139754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:33 compute-0 sudo[139027]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:33 compute-0 sudo[140745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldvogepjatwriupoufbnojzoqigvgcrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531233.3679645-331-248167920978626/AnsiballZ_systemd.py'
Feb 19 20:00:33 compute-0 sudo[140745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:33 compute-0 python3.9[140778]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 20:00:33 compute-0 systemd[1]: Reloading.
Feb 19 20:00:33 compute-0 systemd-sysv-generator[141362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:33 compute-0 systemd-rc-local-generator[141359]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:34 compute-0 sudo[140745]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:34 compute-0 sudo[142302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmdgxcmmlmzrucmxakebaauvfyiqbhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531234.2915912-331-238723281961774/AnsiballZ_systemd.py'
Feb 19 20:00:34 compute-0 sudo[142302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 20:00:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 20:00:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.524s CPU time.
Feb 19 20:00:34 compute-0 systemd[1]: run-r0f87aff78fce4b088567c8b19d0759f0.service: Deactivated successfully.
Feb 19 20:00:34 compute-0 python3.9[142305]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 20:00:34 compute-0 systemd[1]: Reloading.
Feb 19 20:00:34 compute-0 systemd-rc-local-generator[142338]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:34 compute-0 systemd-sysv-generator[142341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:35 compute-0 sudo[142302]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:35 compute-0 sudo[142502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mehnvspzqhunozeexliudqjyuraxsyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531235.186281-331-162727164636605/AnsiballZ_systemd.py'
Feb 19 20:00:35 compute-0 sudo[142502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:35 compute-0 python3.9[142505]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 20:00:35 compute-0 systemd[1]: Reloading.
Feb 19 20:00:35 compute-0 systemd-rc-local-generator[142530]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:35 compute-0 systemd-sysv-generator[142536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:35 compute-0 sudo[142502]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:36 compute-0 sudo[142699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hecfbckzwfpedpomzuqdtmtzrjrramvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531236.0797834-360-2224925223212/AnsiballZ_systemd.py'
Feb 19 20:00:36 compute-0 sudo[142699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:36 compute-0 python3.9[142702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:36 compute-0 systemd[1]: Reloading.
Feb 19 20:00:36 compute-0 systemd-rc-local-generator[142734]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:36 compute-0 systemd-sysv-generator[142737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:36 compute-0 sudo[142699]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:37 compute-0 sudo[142897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvtaotgrnjmsdtsapmddsfxuiypubjnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531237.0547845-360-176746631169347/AnsiballZ_systemd.py'
Feb 19 20:00:37 compute-0 sudo[142897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:37 compute-0 python3.9[142900]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:37 compute-0 systemd[1]: Reloading.
Feb 19 20:00:37 compute-0 systemd-rc-local-generator[142929]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:37 compute-0 systemd-sysv-generator[142932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:37 compute-0 sudo[142897]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:38 compute-0 sudo[143096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxqshtfpcfmgveesmbjfwdsbkcabzdyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531237.9807587-360-90220813261235/AnsiballZ_systemd.py'
Feb 19 20:00:38 compute-0 sudo[143096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:38 compute-0 python3.9[143099]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:38 compute-0 systemd[1]: Reloading.
Feb 19 20:00:38 compute-0 systemd-rc-local-generator[143126]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:38 compute-0 systemd-sysv-generator[143129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:38 compute-0 sudo[143096]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:39 compute-0 sudo[143294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzlvgqvdfronocgxexpeyxpcqbhtyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531238.843831-360-44949348741715/AnsiballZ_systemd.py'
Feb 19 20:00:39 compute-0 sudo[143294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:39 compute-0 python3.9[143297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:39 compute-0 sudo[143294]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:39 compute-0 sudo[143450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuydrcxpxdlicoprupjvlxieilpuaozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531239.510862-360-167363006481640/AnsiballZ_systemd.py'
Feb 19 20:00:39 compute-0 sudo[143450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:40 compute-0 python3.9[143453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:40 compute-0 systemd[1]: Reloading.
Feb 19 20:00:40 compute-0 systemd-rc-local-generator[143482]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:40 compute-0 systemd-sysv-generator[143488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:40 compute-0 sudo[143450]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:40 compute-0 sudo[143648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhsevcbrrdwwnbnhowwvjrzupnvtgpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531240.4418747-396-60837096222527/AnsiballZ_systemd.py'
Feb 19 20:00:40 compute-0 sudo[143648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:40 compute-0 python3.9[143651]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 19 20:00:41 compute-0 systemd[1]: Reloading.
Feb 19 20:00:41 compute-0 systemd-rc-local-generator[143675]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:00:41 compute-0 systemd-sysv-generator[143681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:00:41 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 19 20:00:41 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 19 20:00:41 compute-0 sudo[143648]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:41 compute-0 sudo[143848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfmljojzzhowxvwwcuqfbnyaghkieypr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531241.4280112-404-254886297202931/AnsiballZ_systemd.py'
Feb 19 20:00:41 compute-0 sudo[143848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:41 compute-0 python3.9[143851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:41 compute-0 sudo[143848]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:42 compute-0 sudo[144004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhsnluxsacwrboubshkygzmtpqpzkpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531242.4897127-404-246327946172703/AnsiballZ_systemd.py'
Feb 19 20:00:42 compute-0 sudo[144004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:43 compute-0 python3.9[144007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:43 compute-0 sudo[144004]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:43 compute-0 sudo[144160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okaprwxjxenguneystzfmwmwqzfrytxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531243.2164018-404-276022686198236/AnsiballZ_systemd.py'
Feb 19 20:00:43 compute-0 sudo[144160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:43 compute-0 podman[144162]: 2026-02-19 20:00:43.567923192 +0000 UTC m=+0.089602819 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:00:43 compute-0 python3.9[144164]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:43 compute-0 sudo[144160]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:44 compute-0 sudo[144342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwckzbyhzuovjpzvwcjtelefqnjwgtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531243.9498942-404-230700645469680/AnsiballZ_systemd.py'
Feb 19 20:00:44 compute-0 sudo[144342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:44 compute-0 python3.9[144345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:44 compute-0 sudo[144342]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:44 compute-0 sudo[144498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcvkcwblwlsevpraithziliydwyfipu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531244.6768036-404-83783933803041/AnsiballZ_systemd.py'
Feb 19 20:00:44 compute-0 sudo[144498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:45 compute-0 python3.9[144501]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:45 compute-0 sudo[144498]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:45 compute-0 sudo[144654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssonyoquaqrrhfxooxzmqhjaqwnkatqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531245.3970027-404-236869557999652/AnsiballZ_systemd.py'
Feb 19 20:00:45 compute-0 sudo[144654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:45 compute-0 python3.9[144657]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:45 compute-0 sudo[144654]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:46 compute-0 sudo[144810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uflcjnmqitayquhkcbjgwfmczoslwtia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531246.0944836-404-209189798291812/AnsiballZ_systemd.py'
Feb 19 20:00:46 compute-0 sudo[144810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:46 compute-0 python3.9[144813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:46 compute-0 sudo[144810]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:47 compute-0 sudo[144966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcobtnvqtkpafxfourvxktrslmpasvgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531246.8081875-404-779482490626/AnsiballZ_systemd.py'
Feb 19 20:00:47 compute-0 sudo[144966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:47 compute-0 python3.9[144969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:47 compute-0 sudo[144966]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:47 compute-0 sudo[145122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihqqjtzweerrgviptvsgiphifonycdsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531247.5410905-404-63483730734229/AnsiballZ_systemd.py'
Feb 19 20:00:47 compute-0 sudo[145122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:48 compute-0 python3.9[145125]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:48 compute-0 sudo[145122]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:48 compute-0 sudo[145278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emalalnzgkajaaiwrvgqviwzvspmlapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531248.2868068-404-93055920805102/AnsiballZ_systemd.py'
Feb 19 20:00:48 compute-0 sudo[145278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:48 compute-0 python3.9[145281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:48 compute-0 sudo[145278]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:49 compute-0 sudo[145434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypoqcfyknaedzaeaiebxyqmrnjhfrxca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531249.0341156-404-30813437886923/AnsiballZ_systemd.py'
Feb 19 20:00:49 compute-0 sudo[145434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:49 compute-0 python3.9[145437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:49 compute-0 sudo[145434]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:50 compute-0 sudo[145590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypywfitixiwyoupubjucsxjyebxetfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531249.818863-404-201965534707722/AnsiballZ_systemd.py'
Feb 19 20:00:50 compute-0 sudo[145590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:50 compute-0 python3.9[145593]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:50 compute-0 sudo[145590]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:50 compute-0 sudo[145746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvqwwgfmypmrryenwaklkqcfyvutkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531250.5298693-404-28913837390245/AnsiballZ_systemd.py'
Feb 19 20:00:50 compute-0 sudo[145746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:51 compute-0 python3.9[145749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:51 compute-0 sudo[145746]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:51 compute-0 sudo[145902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujidqkpgetycwdloimnqsmcahvbblidg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531251.2508135-404-175402145304102/AnsiballZ_systemd.py'
Feb 19 20:00:51 compute-0 sudo[145902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:51 compute-0 python3.9[145905]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 19 20:00:51 compute-0 sudo[145902]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:52 compute-0 sudo[146058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dniuakuawserwttznoycfkiznlqxgquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531252.1742442-506-153767602565524/AnsiballZ_file.py'
Feb 19 20:00:52 compute-0 sudo[146058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:52 compute-0 python3.9[146061]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:00:52 compute-0 sudo[146058]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:52 compute-0 sudo[146211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmikrbaxcgsvyfuaiuifjatwdkwxscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531252.7338653-506-147927053966312/AnsiballZ_file.py'
Feb 19 20:00:52 compute-0 sudo[146211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:53 compute-0 python3.9[146214]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:00:53 compute-0 sudo[146211]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:53 compute-0 sudo[146364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthplsarkoxijuqxjjnufnagbhlgizal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531253.2675805-506-70299836004059/AnsiballZ_file.py'
Feb 19 20:00:53 compute-0 sudo[146364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:53 compute-0 python3.9[146367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:00:53 compute-0 sudo[146364]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:54 compute-0 sudo[146517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvuhuyilxmamfnlsodphikjhcjpndqdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531253.818426-506-264393210173845/AnsiballZ_file.py'
Feb 19 20:00:54 compute-0 sudo[146517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:54 compute-0 python3.9[146520]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:00:54 compute-0 sudo[146517]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:54 compute-0 sudo[146670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqruezeggavtoksgdxhghjvkozaseez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531254.3403711-506-225773574508685/AnsiballZ_file.py'
Feb 19 20:00:54 compute-0 sudo[146670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:54 compute-0 python3.9[146673]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:00:54 compute-0 sudo[146670]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:55 compute-0 sudo[146823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungjxgasdpqgdugewyznkwcxrqlzwisn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531254.9070463-506-131244049114055/AnsiballZ_file.py'
Feb 19 20:00:55 compute-0 sudo[146823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:55 compute-0 python3.9[146826]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:00:55 compute-0 sudo[146823]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:56 compute-0 python3.9[146976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:00:56 compute-0 sudo[147126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhcoxslmgbfozmdpmrvkbzraywamxggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531256.2672052-557-92886650375936/AnsiballZ_stat.py'
Feb 19 20:00:56 compute-0 sudo[147126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:56 compute-0 python3.9[147129]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:00:56 compute-0 sudo[147126]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:57 compute-0 sudo[147252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuxqiwdykimpkzydfgaxgcpwxchekwlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531256.2672052-557-92886650375936/AnsiballZ_copy.py'
Feb 19 20:00:57 compute-0 sudo[147252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:57 compute-0 python3.9[147255]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531256.2672052-557-92886650375936/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:00:57 compute-0 sudo[147252]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:57 compute-0 sudo[147405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isxwvugjorlhmiawrmmfbwivknsomaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531257.706964-557-116798574805236/AnsiballZ_stat.py'
Feb 19 20:00:57 compute-0 sudo[147405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:58 compute-0 python3.9[147408]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:00:58 compute-0 sudo[147405]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:58 compute-0 sudo[147531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvtrhasqcnranlytgnvanscooednuyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531257.706964-557-116798574805236/AnsiballZ_copy.py'
Feb 19 20:00:58 compute-0 sudo[147531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:58 compute-0 python3.9[147534]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531257.706964-557-116798574805236/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:00:58 compute-0 sudo[147531]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:59 compute-0 sudo[147684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxcttredgwwrsssvosrapzcefoqeitz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531258.7984283-557-136082778231254/AnsiballZ_stat.py'
Feb 19 20:00:59 compute-0 sudo[147684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:59 compute-0 python3.9[147687]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:00:59 compute-0 sudo[147684]: pam_unix(sudo:session): session closed for user root
Feb 19 20:00:59 compute-0 sudo[147810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeqrcnofljgilgsxbypwammplssrnnnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531258.7984283-557-136082778231254/AnsiballZ_copy.py'
Feb 19 20:00:59 compute-0 sudo[147810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:00:59 compute-0 python3.9[147813]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531258.7984283-557-136082778231254/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:00:59 compute-0 sudo[147810]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:00 compute-0 sudo[147965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lczitpnhoeqbkkkflwiimwtrriluypph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531259.802718-557-86370125897099/AnsiballZ_stat.py'
Feb 19 20:01:00 compute-0 sudo[147965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:00 compute-0 python3.9[147968]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:00 compute-0 sudo[147965]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:00 compute-0 sudo[148091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgnxcczufdvicfnyecjdrurlwmmosqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531259.802718-557-86370125897099/AnsiballZ_copy.py'
Feb 19 20:01:00 compute-0 sudo[148091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:00 compute-0 python3.9[148094]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531259.802718-557-86370125897099/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:00 compute-0 sudo[148091]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:01 compute-0 sudo[148244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmaixdswooyqgbkqphjqngyndovwntdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531260.9233985-557-27393797558278/AnsiballZ_stat.py'
Feb 19 20:01:01 compute-0 sudo[148244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:01 compute-0 sshd-session[147814]: Invalid user vasilev from 103.213.238.91 port 46360
Feb 19 20:01:01 compute-0 CROND[148249]: (root) CMD (run-parts /etc/cron.hourly)
Feb 19 20:01:01 compute-0 run-parts[148252]: (/etc/cron.hourly) starting 0anacron
Feb 19 20:01:01 compute-0 anacron[148260]: Anacron started on 2026-02-19
Feb 19 20:01:01 compute-0 anacron[148260]: Will run job `cron.daily' in 22 min.
Feb 19 20:01:01 compute-0 anacron[148260]: Will run job `cron.weekly' in 42 min.
Feb 19 20:01:01 compute-0 anacron[148260]: Will run job `cron.monthly' in 62 min.
Feb 19 20:01:01 compute-0 anacron[148260]: Jobs will be executed sequentially
Feb 19 20:01:01 compute-0 run-parts[148262]: (/etc/cron.hourly) finished 0anacron
Feb 19 20:01:01 compute-0 CROND[148248]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 19 20:01:01 compute-0 python3.9[148247]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:01 compute-0 sudo[148244]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:01 compute-0 sshd-session[147814]: Received disconnect from 103.213.238.91 port 46360:11: Bye Bye [preauth]
Feb 19 20:01:01 compute-0 sshd-session[147814]: Disconnected from invalid user vasilev 103.213.238.91 port 46360 [preauth]
Feb 19 20:01:01 compute-0 podman[148265]: 2026-02-19 20:01:01.513239459 +0000 UTC m=+0.089434153 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:01:01 compute-0 sudo[148407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlcmbmkjdddsziffounsyvvpmgmphtgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531260.9233985-557-27393797558278/AnsiballZ_copy.py'
Feb 19 20:01:01 compute-0 sudo[148407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:01 compute-0 python3.9[148410]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531260.9233985-557-27393797558278/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:01 compute-0 sudo[148407]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:02 compute-0 sudo[148560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itypmyrnnxjuypeakqvuxffibkebycym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531261.9734852-557-66615838565633/AnsiballZ_stat.py'
Feb 19 20:01:02 compute-0 sudo[148560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:02 compute-0 python3.9[148563]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:02 compute-0 sudo[148560]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:02 compute-0 sudo[148686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqjiajbwlutzsykavevrbacpdpghuov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531261.9734852-557-66615838565633/AnsiballZ_copy.py'
Feb 19 20:01:02 compute-0 sudo[148686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:02 compute-0 python3.9[148689]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531261.9734852-557-66615838565633/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:02 compute-0 sudo[148686]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:03 compute-0 sudo[148839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmtlfnfditrnaqtxytmlocycezygzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531262.949454-557-151251485672819/AnsiballZ_stat.py'
Feb 19 20:01:03 compute-0 sudo[148839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:03 compute-0 python3.9[148842]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:03 compute-0 sudo[148839]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:03 compute-0 sudo[148963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugbdynitiwlegzzwunvxtodeigwahmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531262.949454-557-151251485672819/AnsiballZ_copy.py'
Feb 19 20:01:03 compute-0 sudo[148963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:03 compute-0 python3.9[148966]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531262.949454-557-151251485672819/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:03 compute-0 sudo[148963]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:04 compute-0 sudo[149116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqwuhbimrhuodrtkpcwwvqmlbontbzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531264.033233-557-161157157081028/AnsiballZ_stat.py'
Feb 19 20:01:04 compute-0 sudo[149116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:04 compute-0 python3.9[149119]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:04 compute-0 sudo[149116]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:04 compute-0 sudo[149242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sstdufjrqxcmttgsagbawtimczkisbvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531264.033233-557-161157157081028/AnsiballZ_copy.py'
Feb 19 20:01:04 compute-0 sudo[149242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:04 compute-0 python3.9[149245]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771531264.033233-557-161157157081028/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:04 compute-0 sudo[149242]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:05 compute-0 sudo[149395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghwckjainhmklkudsxdgukfsgirfxsjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531265.122014-670-141568641866609/AnsiballZ_command.py'
Feb 19 20:01:05 compute-0 sudo[149395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:05 compute-0 python3.9[149398]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 19 20:01:05 compute-0 sudo[149395]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:05 compute-0 sudo[149549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewkbeuwbvekbpvozvnciqsqpkliyyto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531265.7581327-679-279873642095663/AnsiballZ_file.py'
Feb 19 20:01:05 compute-0 sudo[149549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:06 compute-0 python3.9[149552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:06 compute-0 sudo[149549]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:06 compute-0 sudo[149702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytcnzcxsydhncczdjtpnujvyacjartin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531266.2794673-679-56830665621626/AnsiballZ_file.py'
Feb 19 20:01:06 compute-0 sudo[149702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:06 compute-0 python3.9[149705]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:06 compute-0 sudo[149702]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:06 compute-0 sudo[149855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkezpqosqgwdvnsiwrfzojlswsexzore ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531266.7947125-679-231258473111132/AnsiballZ_file.py'
Feb 19 20:01:06 compute-0 sudo[149855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:07 compute-0 python3.9[149858]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:07 compute-0 sudo[149855]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:07 compute-0 sudo[150008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqswpqqdumlvplbqaukualjzweghvmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531267.3041897-679-193668969508206/AnsiballZ_file.py'
Feb 19 20:01:07 compute-0 sudo[150008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:07 compute-0 python3.9[150011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:07 compute-0 sudo[150008]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:08 compute-0 sudo[150161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cowhviqpsnhecrkrempkwuudqwayvcnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531267.8723373-679-193842301687004/AnsiballZ_file.py'
Feb 19 20:01:08 compute-0 sudo[150161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:08 compute-0 python3.9[150164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:08 compute-0 sudo[150161]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:08 compute-0 sudo[150316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrhhdhqleccixylmskofwcouhwloeyul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531268.3585172-679-240322646328305/AnsiballZ_file.py'
Feb 19 20:01:08 compute-0 sudo[150316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:08 compute-0 python3.9[150319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:08 compute-0 sudo[150316]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:08 compute-0 sshd-session[150165]: Invalid user n8n from 83.235.16.111 port 45922
Feb 19 20:01:09 compute-0 sshd-session[150165]: Received disconnect from 83.235.16.111 port 45922:11: Bye Bye [preauth]
Feb 19 20:01:09 compute-0 sshd-session[150165]: Disconnected from invalid user n8n 83.235.16.111 port 45922 [preauth]
Feb 19 20:01:09 compute-0 sudo[150469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rikofzdgaflhdqgpefptptxseoeqergq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531268.8781595-679-62281991674340/AnsiballZ_file.py'
Feb 19 20:01:09 compute-0 sudo[150469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:09 compute-0 python3.9[150472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:09 compute-0 sudo[150469]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:09 compute-0 sudo[150622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuafjnekitdiubevmuwlyweeqshposjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531269.4312522-679-259512528371111/AnsiballZ_file.py'
Feb 19 20:01:09 compute-0 sudo[150622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:09 compute-0 python3.9[150625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:09 compute-0 sudo[150622]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:10 compute-0 sudo[150775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhtkopksmcgsunbnmbfhvlrltoghksc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531270.0098393-679-176062889475222/AnsiballZ_file.py'
Feb 19 20:01:10 compute-0 sudo[150775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:10 compute-0 python3.9[150778]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:10 compute-0 sudo[150775]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:10 compute-0 sudo[150928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhudgmodizmpwhhntzdjowazxbizigvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531270.5799792-679-191353666027336/AnsiballZ_file.py'
Feb 19 20:01:10 compute-0 sudo[150928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:10 compute-0 python3.9[150931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:11 compute-0 sudo[150928]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:11 compute-0 sudo[151081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybfqsmguoblsaibuaurdoxivgqlyzdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531271.1165292-679-247598274379963/AnsiballZ_file.py'
Feb 19 20:01:11 compute-0 sudo[151081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:11 compute-0 python3.9[151084]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:11 compute-0 sudo[151081]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:11 compute-0 sudo[151234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogjlovienumesfzepbrnrmovrxozjjix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531271.6111517-679-13975973494624/AnsiballZ_file.py'
Feb 19 20:01:11 compute-0 sudo[151234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:12 compute-0 python3.9[151237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:12 compute-0 sudo[151234]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:12 compute-0 sudo[151387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjgebcclaohvolevnbgczpwlqxzpplxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531272.1520889-679-145919906224224/AnsiballZ_file.py'
Feb 19 20:01:12 compute-0 sudo[151387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:12 compute-0 python3.9[151390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:12 compute-0 sudo[151387]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:12 compute-0 sudo[151540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkzfgxplsqdgvuwembiadwgbixrmbrei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531272.6296604-679-223897178333837/AnsiballZ_file.py'
Feb 19 20:01:12 compute-0 sudo[151540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:13 compute-0 python3.9[151543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:13 compute-0 sudo[151540]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:13 compute-0 sudo[151693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqaldxlrvhcbebvdpvfaadqqgwazluyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531273.2209167-778-213557172767668/AnsiballZ_stat.py'
Feb 19 20:01:13 compute-0 sudo[151693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:13 compute-0 python3.9[151696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:13 compute-0 sudo[151693]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:13 compute-0 podman[151697]: 2026-02-19 20:01:13.795730257 +0000 UTC m=+0.084425669 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:01:13 compute-0 sudo[151843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reiigvjvfgmnrzbncdttkmvocconwzez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531273.2209167-778-213557172767668/AnsiballZ_copy.py'
Feb 19 20:01:14 compute-0 sudo[151843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:14 compute-0 python3.9[151846]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531273.2209167-778-213557172767668/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:14 compute-0 sudo[151843]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:14 compute-0 sudo[151996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaejehugfrahpywqnzstowugpxigedgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531274.3350358-778-196657710761289/AnsiballZ_stat.py'
Feb 19 20:01:14 compute-0 sudo[151996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:14 compute-0 python3.9[151999]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:14 compute-0 sudo[151996]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:15 compute-0 sudo[152120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubxxidhyuqkpjwekhfesgysmrjzdpbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531274.3350358-778-196657710761289/AnsiballZ_copy.py'
Feb 19 20:01:15 compute-0 sudo[152120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:15 compute-0 python3.9[152123]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531274.3350358-778-196657710761289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:15 compute-0 sudo[152120]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:15 compute-0 sudo[152273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhjpbpeebodpembfogkaeohnoaowqdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531275.369024-778-246268268661828/AnsiballZ_stat.py'
Feb 19 20:01:15 compute-0 sudo[152273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:15 compute-0 python3.9[152276]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:15 compute-0 sudo[152273]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:16 compute-0 sudo[152397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgbkzyqoiksfdmxtlrsdrubomvjlotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531275.369024-778-246268268661828/AnsiballZ_copy.py'
Feb 19 20:01:16 compute-0 sudo[152397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:16 compute-0 python3.9[152400]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531275.369024-778-246268268661828/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:16 compute-0 sudo[152397]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:16 compute-0 sudo[152550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfapypbaffvmppyjttnitiwuyvnxrpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531276.3621047-778-224345189394947/AnsiballZ_stat.py'
Feb 19 20:01:16 compute-0 sudo[152550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:16 compute-0 python3.9[152553]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:16 compute-0 sudo[152550]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:17 compute-0 sudo[152674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbbooujijazsbturkqcwmrlpgyssjgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531276.3621047-778-224345189394947/AnsiballZ_copy.py'
Feb 19 20:01:17 compute-0 sudo[152674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:17 compute-0 python3.9[152677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531276.3621047-778-224345189394947/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:17 compute-0 sudo[152674]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:17 compute-0 sudo[152827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raalebfzpimwmgwyipqfnljufgckmwwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531277.3445907-778-240021021647269/AnsiballZ_stat.py'
Feb 19 20:01:17 compute-0 sudo[152827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:17 compute-0 python3.9[152830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:17 compute-0 sudo[152827]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:18 compute-0 sudo[152951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyllvkbgytirsgndldrvtjbvjuukfxeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531277.3445907-778-240021021647269/AnsiballZ_copy.py'
Feb 19 20:01:18 compute-0 sudo[152951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:18 compute-0 python3.9[152954]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531277.3445907-778-240021021647269/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:18 compute-0 sudo[152951]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:18 compute-0 sudo[153104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwgkkqdjplxfqvnwkwmffewdjqwtbxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531278.3913422-778-224919818137002/AnsiballZ_stat.py'
Feb 19 20:01:18 compute-0 sudo[153104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:18 compute-0 python3.9[153107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:18 compute-0 sudo[153104]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:19 compute-0 sudo[153228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfswjpsupbielxsroywqrfoybrntfynm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531278.3913422-778-224919818137002/AnsiballZ_copy.py'
Feb 19 20:01:19 compute-0 sudo[153228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:19 compute-0 python3.9[153231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531278.3913422-778-224919818137002/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:19 compute-0 sudo[153228]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:19 compute-0 sudo[153381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpolsksfzvmwypthcvzmkmfiahwxtzpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531279.4076002-778-150414884420056/AnsiballZ_stat.py'
Feb 19 20:01:19 compute-0 sudo[153381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:19 compute-0 python3.9[153384]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:19 compute-0 sudo[153381]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:20 compute-0 sudo[153505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymekojckpcmgwpmrwccmlkkntwlscvrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531279.4076002-778-150414884420056/AnsiballZ_copy.py'
Feb 19 20:01:20 compute-0 sudo[153505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:20 compute-0 python3.9[153508]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531279.4076002-778-150414884420056/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:20 compute-0 sudo[153505]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:20 compute-0 sudo[153658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzmvjoztzwoblztblfecwtwifooqpzqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531280.4154685-778-270540585764545/AnsiballZ_stat.py'
Feb 19 20:01:20 compute-0 sudo[153658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:20 compute-0 python3.9[153661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:20 compute-0 sudo[153658]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:21 compute-0 sudo[153782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pufblecsijfiwcjraoucjczdhalvidbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531280.4154685-778-270540585764545/AnsiballZ_copy.py'
Feb 19 20:01:21 compute-0 sudo[153782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:21 compute-0 python3.9[153785]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531280.4154685-778-270540585764545/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:21 compute-0 sudo[153782]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:21 compute-0 sudo[153935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqplimkipsmlcbmziaeopmmvrttzwach ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531281.5130777-778-90191313455240/AnsiballZ_stat.py'
Feb 19 20:01:21 compute-0 sudo[153935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:21 compute-0 python3.9[153938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:21 compute-0 sudo[153935]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:22 compute-0 sudo[154059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkpfstdemijqdlbrpyyvywcyzzmutkxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531281.5130777-778-90191313455240/AnsiballZ_copy.py'
Feb 19 20:01:22 compute-0 sudo[154059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:22 compute-0 python3.9[154062]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531281.5130777-778-90191313455240/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:22 compute-0 sudo[154059]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:22 compute-0 sudo[154212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zszxbvilsujpmghqreeusuwmghkkyqwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531282.5006516-778-32989340160615/AnsiballZ_stat.py'
Feb 19 20:01:22 compute-0 sudo[154212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:22 compute-0 python3.9[154215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:22 compute-0 sudo[154212]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:23 compute-0 sudo[154336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqbwjbnwrhucwwafbeskwqjxkidnlax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531282.5006516-778-32989340160615/AnsiballZ_copy.py'
Feb 19 20:01:23 compute-0 sudo[154336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:23 compute-0 python3.9[154339]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531282.5006516-778-32989340160615/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:23 compute-0 sudo[154336]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:23 compute-0 sudo[154489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgctjystolsseowgnyqzlygsivvdwrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531283.4873884-778-164527133368625/AnsiballZ_stat.py'
Feb 19 20:01:23 compute-0 sudo[154489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:23 compute-0 python3.9[154492]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:23 compute-0 sudo[154489]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:24 compute-0 sudo[154613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyzjuvuphzppbffkozpxcprumknfizoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531283.4873884-778-164527133368625/AnsiballZ_copy.py'
Feb 19 20:01:24 compute-0 sudo[154613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:24 compute-0 python3.9[154616]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531283.4873884-778-164527133368625/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:24 compute-0 sudo[154613]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:24 compute-0 sudo[154766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeunjrxinabynljzkxbscsgkrlvczvby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531284.5006988-778-131017062770135/AnsiballZ_stat.py'
Feb 19 20:01:24 compute-0 sudo[154766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:24 compute-0 python3.9[154769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:24 compute-0 sudo[154766]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:25 compute-0 sudo[154890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhfmbcrvyglyzqsefiedxdrluacakvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531284.5006988-778-131017062770135/AnsiballZ_copy.py'
Feb 19 20:01:25 compute-0 sudo[154890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:25 compute-0 python3.9[154893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531284.5006988-778-131017062770135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:25 compute-0 sudo[154890]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:25 compute-0 sudo[155043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goflsgdbnvpzghnemtwjwkccsxoxmkkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531285.4499393-778-251321311431833/AnsiballZ_stat.py'
Feb 19 20:01:25 compute-0 sudo[155043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:25 compute-0 python3.9[155046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:25 compute-0 sudo[155043]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:26 compute-0 sudo[155167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlvlevsnakjtrvlssdpddhcwxsushtbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531285.4499393-778-251321311431833/AnsiballZ_copy.py'
Feb 19 20:01:26 compute-0 sudo[155167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:26 compute-0 python3.9[155170]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531285.4499393-778-251321311431833/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:26 compute-0 sudo[155167]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:26 compute-0 sudo[155320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnttsqazshxduysocbphngligtwtgcex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531286.3735092-778-111403707400330/AnsiballZ_stat.py'
Feb 19 20:01:26 compute-0 sudo[155320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:26 compute-0 python3.9[155323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:26 compute-0 sudo[155320]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:27 compute-0 sudo[155444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbbwtubvfzauoyykijhxefarmpyqgnlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531286.3735092-778-111403707400330/AnsiballZ_copy.py'
Feb 19 20:01:27 compute-0 sudo[155444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:27 compute-0 python3.9[155447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531286.3735092-778-111403707400330/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:27 compute-0 sudo[155444]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:27 compute-0 python3.9[155597]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:01:28 compute-0 sudo[155750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjwrtmughugbgplzwwdqaewfzvqichga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531288.1179392-984-85874431685922/AnsiballZ_seboolean.py'
Feb 19 20:01:28 compute-0 sudo[155750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:28 compute-0 python3.9[155753]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 19 20:01:29 compute-0 sudo[155750]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:30 compute-0 sudo[155907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrfllhpyescmkzmywrxwcdhvdeuscab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531289.9722698-992-60264533642720/AnsiballZ_copy.py'
Feb 19 20:01:30 compute-0 dbus-broker-launch[789]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 19 20:01:30 compute-0 sudo[155907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:30 compute-0 python3.9[155910]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:30 compute-0 sudo[155907]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:01:30.404 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:01:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:01:30.405 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:01:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:01:30.405 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:01:30 compute-0 sudo[156060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xntrpbtazekwycenflowlqeegzfbmkgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531290.5165808-992-169311397262493/AnsiballZ_copy.py'
Feb 19 20:01:30 compute-0 sudo[156060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:30 compute-0 python3.9[156063]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:30 compute-0 sudo[156060]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:31 compute-0 sudo[156213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsdgoguqonqysroayuxkruzdrzcrwxni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531291.0479603-992-241455986139205/AnsiballZ_copy.py'
Feb 19 20:01:31 compute-0 sudo[156213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:31 compute-0 python3.9[156216]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:31 compute-0 sudo[156213]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:31 compute-0 sudo[156378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjwdsktwwaveehkkvnffjmnmkpcnodu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531291.6281395-992-45790560782418/AnsiballZ_copy.py'
Feb 19 20:01:31 compute-0 sudo[156378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:31 compute-0 podman[156340]: 2026-02-19 20:01:31.885015004 +0000 UTC m=+0.046178529 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 19 20:01:32 compute-0 python3.9[156385]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:32 compute-0 sudo[156378]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:32 compute-0 sudo[156535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmdoxbrhqbmjuzuxqtlbwsfycvxccffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531292.160362-992-189653323601767/AnsiballZ_copy.py'
Feb 19 20:01:32 compute-0 sudo[156535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:32 compute-0 python3.9[156538]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:32 compute-0 sudo[156535]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:33 compute-0 sudo[156688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owpohicqawckxkpxfeextwqxzwyetfpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531292.8780324-1028-82904057203450/AnsiballZ_copy.py'
Feb 19 20:01:33 compute-0 sudo[156688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:33 compute-0 python3.9[156691]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:33 compute-0 sudo[156688]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:33 compute-0 sudo[156841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfehwogrtaocuvygktswrdbcwjbvyxja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531293.45444-1028-277988532777614/AnsiballZ_copy.py'
Feb 19 20:01:33 compute-0 sudo[156841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:33 compute-0 python3.9[156844]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:33 compute-0 sudo[156841]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:34 compute-0 sudo[156994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfltkvdxcjxefevfjyvuomjaiwyuvhrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531294.004403-1028-30187234795090/AnsiballZ_copy.py'
Feb 19 20:01:34 compute-0 sudo[156994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:34 compute-0 python3.9[156997]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:34 compute-0 sudo[156994]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:34 compute-0 sudo[157147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvihklbvaecpapbqexndiuubqyzknuam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531294.556221-1028-127216768290378/AnsiballZ_copy.py'
Feb 19 20:01:34 compute-0 sudo[157147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:35 compute-0 python3.9[157150]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:35 compute-0 sudo[157147]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:35 compute-0 sudo[157300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvvhnhqbkbqmarhfkyefussjkrrbhegn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531295.185935-1028-186992410285684/AnsiballZ_copy.py'
Feb 19 20:01:35 compute-0 sudo[157300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:35 compute-0 python3.9[157303]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:35 compute-0 sudo[157300]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:35 compute-0 sudo[157453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgsomywofcezkpjkzeuvbpslinqarxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531295.7402737-1064-212090243917312/AnsiballZ_systemd.py'
Feb 19 20:01:35 compute-0 sudo[157453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:36 compute-0 python3.9[157456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:01:36 compute-0 systemd[1]: Reloading.
Feb 19 20:01:36 compute-0 systemd-sysv-generator[157487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:01:36 compute-0 systemd-rc-local-generator[157483]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:01:36 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Feb 19 20:01:36 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Feb 19 20:01:36 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 19 20:01:36 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 19 20:01:36 compute-0 systemd[1]: Starting libvirt logging daemon...
Feb 19 20:01:36 compute-0 systemd[1]: Started libvirt logging daemon.
Feb 19 20:01:36 compute-0 sudo[157453]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:37 compute-0 sudo[157655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfuubsnwdlophmdjikrepexwrxtxzseq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531296.7462547-1064-117936758495179/AnsiballZ_systemd.py'
Feb 19 20:01:37 compute-0 sudo[157655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:37 compute-0 python3.9[157658]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:01:37 compute-0 systemd[1]: Reloading.
Feb 19 20:01:37 compute-0 systemd-sysv-generator[157690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:01:37 compute-0 systemd-rc-local-generator[157683]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:01:37 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 19 20:01:37 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 19 20:01:37 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 19 20:01:37 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 19 20:01:37 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 19 20:01:37 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 19 20:01:37 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 19 20:01:37 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 19 20:01:37 compute-0 sudo[157655]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:38 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 19 20:01:38 compute-0 sudo[157880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhywtvwuckaenjoaogcyzyabdomupnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531297.8295166-1064-130571185384271/AnsiballZ_systemd.py'
Feb 19 20:01:38 compute-0 sudo[157880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:38 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 19 20:01:38 compute-0 python3.9[157883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:01:38 compute-0 systemd[1]: Reloading.
Feb 19 20:01:38 compute-0 systemd-rc-local-generator[157912]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:01:38 compute-0 systemd-sysv-generator[157916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:01:38 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 19 20:01:38 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 19 20:01:38 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 19 20:01:38 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 19 20:01:38 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:01:38 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:01:38 compute-0 sudo[157880]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:38 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 19 20:01:38 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 19 20:01:39 compute-0 sudo[158108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxhbgmtlvsskqfsyawruenliwrllhqbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531298.8464844-1064-213468918516057/AnsiballZ_systemd.py'
Feb 19 20:01:39 compute-0 sudo[158108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:39 compute-0 python3.9[158111]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:01:39 compute-0 systemd[1]: Reloading.
Feb 19 20:01:39 compute-0 systemd-sysv-generator[158143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:01:39 compute-0 systemd-rc-local-generator[158140]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:01:39 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Feb 19 20:01:39 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 19 20:01:39 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 19 20:01:39 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 19 20:01:39 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 19 20:01:39 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 19 20:01:39 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 19 20:01:39 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 19 20:01:39 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 19 20:01:39 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 19 20:01:39 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 19 20:01:39 compute-0 setroubleshoot[157853]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b7483c53-0360-4506-a60f-49199e54be1c
Feb 19 20:01:39 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 19 20:01:39 compute-0 setroubleshoot[157853]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
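
Before writing a local module, it can help to confirm the denial and what the loaded policy already allows; a minimal check, assuming the audit and setools-console packages are installed and that virtlogd runs in the usual virtlogd_t domain:
# ausearch -m avc -c virtlogd -ts recent
# sesearch --allow -s virtlogd_t -c capability
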
                                                  
Feb 19 20:01:39 compute-0 rsyslogd[1014]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 20:01:39 compute-0 sudo[158108]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:40 compute-0 sudo[158333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugreyqmfxhhekgedlzkgwojuzsnwrmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531299.8402402-1064-218556013255285/AnsiballZ_systemd.py'
Feb 19 20:01:40 compute-0 sudo[158333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:40 compute-0 python3.9[158336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:01:40 compute-0 systemd[1]: Reloading.
Feb 19 20:01:40 compute-0 systemd-sysv-generator[158363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 19 20:01:40 compute-0 systemd-rc-local-generator[158355]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:01:40 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Feb 19 20:01:40 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Feb 19 20:01:40 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 19 20:01:40 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 19 20:01:40 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 19 20:01:40 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 19 20:01:40 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 19 20:01:40 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 19 20:01:40 compute-0 sudo[158333]: pam_unix(sudo:session): session closed for user root
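
The two ansible.builtin.systemd invocations above (daemon_reload=True, state=restarted) reduce to the following commands; a sketch of what the module does, not its literal execution path:
# systemctl daemon-reload
# systemctl restart virtqemud.service
# systemctl restart virtsecretd.service
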
Feb 19 20:01:41 compute-0 sudo[158554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pruzhyalezhexqjwphtejpdyfutfsiul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531300.9051144-1101-267609045414335/AnsiballZ_file.py'
Feb 19 20:01:41 compute-0 sudo[158554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:41 compute-0 python3.9[158557]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:41 compute-0 sudo[158554]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:41 compute-0 sudo[158707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alitgedrdpqhmipyobtibdfilkfzujnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531301.4974964-1109-66248211591720/AnsiballZ_find.py'
Feb 19 20:01:41 compute-0 sudo[158707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:41 compute-0 python3.9[158710]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 20:01:41 compute-0 sudo[158707]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:42 compute-0 sudo[158860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viikxkdvjlddygddyoepnrdcmpvrlelt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531302.3883228-1123-106477129228766/AnsiballZ_stat.py'
Feb 19 20:01:42 compute-0 sudo[158860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:42 compute-0 python3.9[158863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:42 compute-0 sudo[158860]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:43 compute-0 sudo[158984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vobedyhtlfpzgzrshmzvbumwljusizwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531302.3883228-1123-106477129228766/AnsiballZ_copy.py'
Feb 19 20:01:43 compute-0 sudo[158984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:43 compute-0 python3.9[158987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531302.3883228-1123-106477129228766/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:43 compute-0 sudo[158984]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:44 compute-0 podman[159056]: 2026-02-19 20:01:44.422540397 +0000 UTC m=+0.105274327 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:01:44 compute-0 sudo[159163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foskniilaqsucjpsilwqjdfpkkijrxab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531304.2633574-1139-27998262533640/AnsiballZ_file.py'
Feb 19 20:01:44 compute-0 sudo[159163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:44 compute-0 python3.9[159166]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:44 compute-0 sudo[159163]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:44 compute-0 sudo[159316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyfhxwvtfsiznkafxdubgjehccnwnkvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531304.786854-1147-8096069334982/AnsiballZ_stat.py'
Feb 19 20:01:44 compute-0 sudo[159316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:45 compute-0 python3.9[159319]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:45 compute-0 sudo[159316]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:45 compute-0 sudo[159395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgnldesbkdmilhlmvjdmetmwwxkmbqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531304.786854-1147-8096069334982/AnsiballZ_file.py'
Feb 19 20:01:45 compute-0 sudo[159395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:45 compute-0 python3.9[159398]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:45 compute-0 sudo[159395]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:45 compute-0 sudo[159548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwaiaythlfkqlhadgcxhnvgfaxguytdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531305.664132-1159-222903343968568/AnsiballZ_stat.py'
Feb 19 20:01:45 compute-0 sudo[159548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:46 compute-0 python3.9[159551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:46 compute-0 sudo[159548]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:46 compute-0 sudo[159627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kobrengsorpjlatzxubspuzbtnqhsoma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531305.664132-1159-222903343968568/AnsiballZ_file.py'
Feb 19 20:01:46 compute-0 sudo[159627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:46 compute-0 python3.9[159630]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i45r7pvy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:46 compute-0 sudo[159627]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:46 compute-0 sudo[159780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbmopedldbbmhueuuuoaxkzvdrelsdoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531306.6121414-1171-160953297270780/AnsiballZ_stat.py'
Feb 19 20:01:46 compute-0 sudo[159780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:46 compute-0 python3.9[159783]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:47 compute-0 sudo[159780]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:47 compute-0 sudo[159859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwfhldntelzkviplqwamlnoejvjhcvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531306.6121414-1171-160953297270780/AnsiballZ_file.py'
Feb 19 20:01:47 compute-0 sudo[159859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:47 compute-0 python3.9[159862]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:47 compute-0 sudo[159859]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:47 compute-0 sudo[160012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvzwkfpelvqgclbbznbnykikkddabbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531307.698519-1184-62140057828721/AnsiballZ_command.py'
Feb 19 20:01:47 compute-0 sudo[160012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:48 compute-0 python3.9[160015]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:01:48 compute-0 sudo[160012]: pam_unix(sudo:session): session closed for user root
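
The JSON ruleset dump captured by this task can be large; one way to inspect it by hand with standard tools (python3 is already present on this host):
# nft -j list ruleset | python3 -m json.tool | less
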
Feb 19 20:01:48 compute-0 sudo[160166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffryphrmvjomoxvepjdwiafkyvpdbfwx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531308.3006427-1192-259352923714227/AnsiballZ_edpm_nftables_from_files.py'
Feb 19 20:01:48 compute-0 sudo[160166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:48 compute-0 python3[160169]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 19 20:01:48 compute-0 sudo[160166]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:49 compute-0 sudo[160319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmqjalcioapepbtfyubhyatxlskzcdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531309.017291-1200-134027181136816/AnsiballZ_stat.py'
Feb 19 20:01:49 compute-0 sudo[160319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:49 compute-0 python3.9[160322]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:49 compute-0 sudo[160319]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:49 compute-0 sudo[160398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmagrwijmxbidbqdzvhqtabpxwnonze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531309.017291-1200-134027181136816/AnsiballZ_file.py'
Feb 19 20:01:49 compute-0 sudo[160398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:49 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 19 20:01:49 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 19 20:01:49 compute-0 python3.9[160401]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:49 compute-0 sudo[160398]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:50 compute-0 sudo[160551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmofmzgnbghhtjalrdemcmasfayctuva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531309.9837728-1212-185043530962049/AnsiballZ_stat.py'
Feb 19 20:01:50 compute-0 sudo[160551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:50 compute-0 python3.9[160554]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:50 compute-0 sudo[160551]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:50 compute-0 sudo[160677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnirvzmtiggzsztgngyfaiyfevuripxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531309.9837728-1212-185043530962049/AnsiballZ_copy.py'
Feb 19 20:01:50 compute-0 sudo[160677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:50 compute-0 python3.9[160680]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531309.9837728-1212-185043530962049/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:50 compute-0 sudo[160677]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:51 compute-0 sudo[160830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwcvbwjwpeyisyndsgsdekebiibznumx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531311.0587513-1227-154896091719101/AnsiballZ_stat.py'
Feb 19 20:01:51 compute-0 sudo[160830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:51 compute-0 python3.9[160833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:51 compute-0 sudo[160830]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:51 compute-0 sudo[160909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiejqbxtkudfjkizjrjlajpbkaupzlcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531311.0587513-1227-154896091719101/AnsiballZ_file.py'
Feb 19 20:01:51 compute-0 sudo[160909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:51 compute-0 python3.9[160912]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:51 compute-0 sudo[160909]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:53 compute-0 sudo[161062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocavzlfdxdlmbftjtixomvwaecxpmmeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531311.9744568-1239-121889168599264/AnsiballZ_stat.py'
Feb 19 20:01:53 compute-0 sudo[161062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:53 compute-0 python3.9[161065]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:53 compute-0 sudo[161062]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:53 compute-0 sudo[161141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rccyjpqbcxuqwmvayzwovcwcuocofkza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531311.9744568-1239-121889168599264/AnsiballZ_file.py'
Feb 19 20:01:53 compute-0 sudo[161141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:53 compute-0 python3.9[161144]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:53 compute-0 sudo[161141]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:54 compute-0 sudo[161294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rifaejslvfceuarqvgpsoqtjgkevwifd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531313.8672802-1251-276191218746953/AnsiballZ_stat.py'
Feb 19 20:01:54 compute-0 sudo[161294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:54 compute-0 python3.9[161297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:54 compute-0 sudo[161294]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:54 compute-0 sudo[161420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwckbbljdxbnzjmpqysjnonostfakmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531313.8672802-1251-276191218746953/AnsiballZ_copy.py'
Feb 19 20:01:54 compute-0 sudo[161420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:54 compute-0 python3.9[161425]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531313.8672802-1251-276191218746953/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:54 compute-0 sudo[161420]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:55 compute-0 sudo[161575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfaerszjnjftlaqjttwzghpuogaihlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531315.0632274-1266-160661645615663/AnsiballZ_file.py'
Feb 19 20:01:55 compute-0 sudo[161575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:55 compute-0 python3.9[161578]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:55 compute-0 sudo[161575]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:55 compute-0 sudo[161728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibriaswkeveqfvrzqbhafpmxxdpobkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531315.6663146-1274-200137034222551/AnsiballZ_command.py'
Feb 19 20:01:55 compute-0 sudo[161728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:56 compute-0 python3.9[161731]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:01:56 compute-0 sudo[161728]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:56 compute-0 sshd-session[161421]: Received disconnect from 158.174.210.161 port 17119:11: Bye Bye [preauth]
Feb 19 20:01:56 compute-0 sshd-session[161421]: Disconnected from authenticating user root 158.174.210.161 port 17119 [preauth]
Feb 19 20:01:56 compute-0 sudo[161884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qubuaujjpjfblricflcwswidzoypxrnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531316.247286-1282-109528558423844/AnsiballZ_blockinfile.py'
Feb 19 20:01:56 compute-0 sudo[161884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:56 compute-0 python3.9[161887]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:56 compute-0 sudo[161884]: pam_unix(sudo:session): session closed for user root
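
Reassembled from the logged block= and marker parameters, the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf reads as follows; validate=nft -c -f %s syntax-checks the candidate file before it replaces the original:
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
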
Feb 19 20:01:57 compute-0 sudo[162037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osmmkhxzzifqykgqagowvinrpvcedjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531317.0520809-1291-188126233212874/AnsiballZ_command.py'
Feb 19 20:01:57 compute-0 sudo[162037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:57 compute-0 python3.9[162040]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:01:57 compute-0 sudo[162037]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:57 compute-0 sudo[162191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgvvovcsrcwsimeqreznrktgepioyuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531317.6448011-1299-21437427988338/AnsiballZ_stat.py'
Feb 19 20:01:57 compute-0 sudo[162191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:58 compute-0 python3.9[162194]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:01:58 compute-0 sudo[162191]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:58 compute-0 sudo[162346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mooahianerclucruxashfucrufidokfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531318.184131-1307-144639299849127/AnsiballZ_command.py'
Feb 19 20:01:58 compute-0 sudo[162346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:58 compute-0 python3.9[162349]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:01:58 compute-0 sudo[162346]: pam_unix(sudo:session): session closed for user root
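
Taken together, the three commands logged between 20:01:56 and 20:01:58 form a check-then-apply pattern: validate the combined ruleset without touching the kernel, load the chain definitions, then flush and reload the rules as one nft transaction (nft -f reads its whole input before applying it):
# cd /etc/nftables
# cat edpm-chains.nft edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft edpm-jumps.nft | nft -c -f -
# nft -f edpm-chains.nft
# cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -
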
Feb 19 20:01:59 compute-0 sudo[162502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwgbxssqpnxikrmzivheidiocythdhys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531318.7854385-1315-162722338250240/AnsiballZ_file.py'
Feb 19 20:01:59 compute-0 sudo[162502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:59 compute-0 python3.9[162505]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:01:59 compute-0 sudo[162502]: pam_unix(sudo:session): session closed for user root
Feb 19 20:01:59 compute-0 sudo[162655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtfypppyufqqubtrupvsljuxynfqklk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531319.3670526-1323-230830857500003/AnsiballZ_stat.py'
Feb 19 20:01:59 compute-0 sudo[162655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:01:59 compute-0 python3.9[162658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:01:59 compute-0 sudo[162655]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:00 compute-0 sudo[162779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opejrshndvqpuocoxmhdbzkaflhmewcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531319.3670526-1323-230830857500003/AnsiballZ_copy.py'
Feb 19 20:02:00 compute-0 sudo[162779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:00 compute-0 python3.9[162782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531319.3670526-1323-230830857500003/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:00 compute-0 sudo[162779]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:00 compute-0 sudo[162932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfocxxrkdpulfhsdqqlqmorzroehippj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531320.7229846-1338-138230360374296/AnsiballZ_stat.py'
Feb 19 20:02:00 compute-0 sudo[162932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:01 compute-0 python3.9[162935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:02:01 compute-0 sudo[162932]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:01 compute-0 sudo[163056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmajeroshpoldrmzrclwbgrbcdloomz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531320.7229846-1338-138230360374296/AnsiballZ_copy.py'
Feb 19 20:02:01 compute-0 sudo[163056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:01 compute-0 python3.9[163059]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531320.7229846-1338-138230360374296/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:01 compute-0 sudo[163056]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:02 compute-0 sudo[163223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpsbifsxatbpjmetxqhutgsddegogxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531321.7624154-1353-102324589085106/AnsiballZ_stat.py'
Feb 19 20:02:02 compute-0 sudo[163223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:02 compute-0 podman[163183]: 2026-02-19 20:02:02.04039811 +0000 UTC m=+0.080888839 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:02:02 compute-0 python3.9[163232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:02:02 compute-0 sudo[163223]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:02 compute-0 sudo[163353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzixdmbpxcdowqyozqfhgvcxmpghtmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531321.7624154-1353-102324589085106/AnsiballZ_copy.py'
Feb 19 20:02:02 compute-0 sudo[163353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:02 compute-0 python3.9[163356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531321.7624154-1353-102324589085106/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:02 compute-0 sudo[163353]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:03 compute-0 sudo[163506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhisgzcmpdlosasghanhzsdfkacjitu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531322.9234898-1368-167739532213772/AnsiballZ_systemd.py'
Feb 19 20:02:03 compute-0 sudo[163506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:03 compute-0 python3.9[163509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:02:03 compute-0 systemd[1]: Reloading.
Feb 19 20:02:03 compute-0 systemd-rc-local-generator[163532]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:02:03 compute-0 systemd-sysv-generator[163540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 19 20:02:03 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Feb 19 20:02:03 compute-0 sudo[163506]: pam_unix(sudo:session): session closed for user root
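
Per the logged parameters (daemon_reload=True, enabled=True, state=restarted), the handling of the custom target reduces to:
# systemctl daemon-reload
# systemctl enable edpm_libvirt.target
# systemctl restart edpm_libvirt.target
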
Feb 19 20:02:04 compute-0 sudo[163705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnhffxdslthhrfsxkivzemfipxaupdyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531323.9677715-1376-96280815205442/AnsiballZ_systemd.py'
Feb 19 20:02:04 compute-0 sudo[163705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:04 compute-0 python3.9[163708]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 19 20:02:04 compute-0 systemd[1]: Reloading.
Feb 19 20:02:04 compute-0 systemd-rc-local-generator[163733]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:02:04 compute-0 systemd-sysv-generator[163739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 19 20:02:04 compute-0 systemd[1]: Reloading.
Feb 19 20:02:04 compute-0 systemd-rc-local-generator[163781]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:02:04 compute-0 systemd-sysv-generator[163784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 19 20:02:05 compute-0 sudo[163705]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:05 compute-0 sshd-session[108730]: Connection closed by 192.168.122.30 port 54588
Feb 19 20:02:05 compute-0 sshd-session[108727]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:02:05 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Feb 19 20:02:05 compute-0 systemd[1]: session-22.scope: Consumed 2min 46.419s CPU time.
Feb 19 20:02:05 compute-0 systemd-logind[810]: Session 22 logged out. Waiting for processes to exit.
Feb 19 20:02:05 compute-0 systemd-logind[810]: Removed session 22.
Feb 19 20:02:11 compute-0 sshd-session[163819]: Accepted publickey for zuul from 192.168.122.30 port 37108 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 20:02:11 compute-0 systemd-logind[810]: New session 23 of user zuul.
Feb 19 20:02:11 compute-0 systemd[1]: Started Session 23 of User zuul.
Feb 19 20:02:11 compute-0 sshd-session[163819]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:02:12 compute-0 python3.9[163972]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:02:13 compute-0 python3.9[164126]: ansible-ansible.builtin.service_facts Invoked
Feb 19 20:02:13 compute-0 network[164143]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 19 20:02:13 compute-0 network[164144]: 'network-scripts' will be removed from the distribution in the near future.
Feb 19 20:02:13 compute-0 network[164145]: It is advised to switch to 'NetworkManager' for network management.
Feb 19 20:02:14 compute-0 podman[164185]: 2026-02-19 20:02:14.533959257 +0000 UTC m=+0.073079197 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:02:16 compute-0 sudo[164440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avrfzqtlvfgbfiicowlckurhpukliomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531335.8043678-42-156382764750438/AnsiballZ_setup.py'
Feb 19 20:02:16 compute-0 sudo[164440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:16 compute-0 python3.9[164443]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 20:02:16 compute-0 sudo[164440]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:17 compute-0 sudo[164525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nucugdmrmojbdqxzwgxhlsaugbubkdep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531335.8043678-42-156382764750438/AnsiballZ_dnf.py'
Feb 19 20:02:17 compute-0 sudo[164525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:17 compute-0 python3.9[164528]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 20:02:22 compute-0 sudo[164525]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:23 compute-0 sudo[164679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yafisnftwrjudpxszhrcaehjuhxdplvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531343.185597-54-131693036040563/AnsiballZ_stat.py'
Feb 19 20:02:23 compute-0 sudo[164679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:23 compute-0 python3.9[164682]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:02:23 compute-0 sudo[164679]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:24 compute-0 sudo[164832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmydoivdxhspxawdgdsiiwbisgsmsxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531343.9297245-64-277333739331792/AnsiballZ_command.py'
Feb 19 20:02:24 compute-0 sudo[164832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:24 compute-0 python3.9[164835]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:02:24 compute-0 sudo[164832]: pam_unix(sudo:session): session closed for user root
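
The restorecon flags make the logged command a read-only audit: -n reports mislabeled files without changing them, -v prints each one, -r recurses. Dropping -n would apply the corrections:
# restorecon -nvr /etc/iscsi /var/lib/iscsi
# restorecon -vr /etc/iscsi /var/lib/iscsi
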
Feb 19 20:02:24 compute-0 sudo[164986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpykleaoommkzgisthmhuehlmgbrrlvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531344.7442622-74-281194663279219/AnsiballZ_stat.py'
Feb 19 20:02:24 compute-0 sudo[164986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:25 compute-0 python3.9[164989]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:02:25 compute-0 sudo[164986]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:25 compute-0 sudo[165139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwwgiepkcoupeqmxrgchihcvftrhmsgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531345.2964003-82-116298561723909/AnsiballZ_command.py'
Feb 19 20:02:25 compute-0 sudo[165139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:25 compute-0 python3.9[165142]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:02:25 compute-0 sudo[165139]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:26 compute-0 sudo[165293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnlulwnjlrjrjlwudjqitutivwnpjztk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531345.8336794-90-171967492620249/AnsiballZ_stat.py'
Feb 19 20:02:26 compute-0 sudo[165293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:26 compute-0 python3.9[165296]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:02:26 compute-0 sudo[165293]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:26 compute-0 sudo[165417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmvzchfhqkynkqnmlfkwevtmhcwftoev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531345.8336794-90-171967492620249/AnsiballZ_copy.py'
Feb 19 20:02:26 compute-0 sudo[165417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:26 compute-0 python3.9[165420]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531345.8336794-90-171967492620249/.source.iscsi _original_basename=.abr8a3g8 follow=False checksum=8c7c90a4bbd3c4711fe4d3b1c82bc9dac818bba0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:26 compute-0 sudo[165417]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:27 compute-0 sudo[165570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cihxexgqliysqhkwnicijvpfxofyynxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531346.9879248-105-66583932604096/AnsiballZ_file.py'
Feb 19 20:02:27 compute-0 sudo[165570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:27 compute-0 python3.9[165573]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:27 compute-0 sudo[165570]: pam_unix(sudo:session): session closed for user root
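
The three tasks above reset the iSCSI initiator identity: iscsi-iname generates a fresh IQN, the copy writes it to /etc/iscsi/initiatorname.iscsi, and the .initiator_reset marker records that the reset has been done. A shell sketch of the same steps; the InitiatorName= key is the file's standard format, though the written content is not itself shown in the log:
# printf 'InitiatorName=%s\n' "$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
# touch /etc/iscsi/.initiator_reset
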
Feb 19 20:02:28 compute-0 sudo[165725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkxheaetcbvnelgcbqianmntkxfcuhyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531347.8822272-113-269080336647840/AnsiballZ_lineinfile.py'
Feb 19 20:02:28 compute-0 sudo[165725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:28 compute-0 python3.9[165728]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:28 compute-0 sudo[165725]: pam_unix(sudo:session): session closed for user root
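This lineinfile call pins the CHAP digest list in iscsid.conf, preferring SHA-3 and SHA-2 digests while keeping the legacy SHA1/MD5 entries last. Every argument below comes straight from the logged invocation; only the task name is assumed:

    - name: Configure CHAP digest algorithms for iSCSI
      ansible.builtin.lineinfile:
        path: /etc/iscsi/iscsid.conf
        regexp: '^node.session.auth.chap_algs'
        insertafter: '^#node.session.auth.chap.algs'
        line: 'node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5'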
Feb 19 20:02:29 compute-0 sudo[165878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otiuzijnclmcavxuzceitrnsufxjoowt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531348.6661375-122-126789712762294/AnsiballZ_systemd_service.py'
Feb 19 20:02:29 compute-0 sudo[165878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:29 compute-0 python3.9[165881]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:02:29 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 19 20:02:29 compute-0 sudo[165878]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:29 compute-0 sshd-session[165598]: Received disconnect from 103.119.94.10 port 56796:11: Bye Bye [preauth]
Feb 19 20:02:29 compute-0 sshd-session[165598]: Disconnected from authenticating user root 103.119.94.10 port 56796 [preauth]
Feb 19 20:02:29 compute-0 sudo[166035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmjdnahztzflszckuvkryfvbzerupvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531349.65757-130-257506016591173/AnsiballZ_systemd_service.py'
Feb 19 20:02:29 compute-0 sudo[166035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:30 compute-0 python3.9[166038]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:02:30 compute-0 systemd[1]: Reloading.
Feb 19 20:02:30 compute-0 systemd-sysv-generator[166070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:02:30 compute-0 systemd-rc-local-generator[166066]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:02:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:02:30.406 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:02:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:02:30.407 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:02:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:02:30.407 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:02:30 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 19 20:02:30 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 19 20:02:30 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Feb 19 20:02:30 compute-0 systemd[1]: Started Open-iSCSI.
Feb 19 20:02:30 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Feb 19 20:02:30 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Feb 19 20:02:30 compute-0 sudo[166035]: pam_unix(sudo:session): session closed for user root
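The two systemd_service invocations enable and start the socket unit first and then the daemon itself, so iscsid can also be socket-activated on demand. The skipped one-time configuration of iscsi.service above is expected: its ConditionPathExists fails precisely because initiatorname.iscsi now exists. A consolidated sketch, where the loop is an assumption over what the log shows as two separate tasks:

    - name: Enable and start iscsid socket and service
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        enabled: true
        state: started
      loop:
        - iscsid.socket
        - iscsid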
Feb 19 20:02:31 compute-0 python3.9[166244]: ansible-ansible.builtin.service_facts Invoked
Feb 19 20:02:31 compute-0 network[166261]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 19 20:02:31 compute-0 network[166262]: 'network-scripts' will be removed from the distribution in the near future.
Feb 19 20:02:31 compute-0 network[166263]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 20:02:32 compute-0 podman[166286]: 2026-02-19 20:02:32.123067241 +0000 UTC m=+0.051870245 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 19 20:02:33 compute-0 sudo[166552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqqqtrmmbmtzblyhjsonsufrnhqhsii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531353.5821116-153-42424403395527/AnsiballZ_dnf.py'
Feb 19 20:02:33 compute-0 sudo[166552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:34 compute-0 python3.9[166555]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 20:02:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 20:02:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 20:02:36 compute-0 systemd[1]: Reloading.
Feb 19 20:02:36 compute-0 systemd-rc-local-generator[166595]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:02:36 compute-0 systemd-sysv-generator[166604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:02:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 20:02:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 20:02:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 20:02:37 compute-0 systemd[1]: run-r18214d799d464798b8268d19f948b267.service: Deactivated successfully.
Feb 19 20:02:37 compute-0 sudo[166552]: pam_unix(sudo:session): session closed for user root
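Next comes the multipath stack: the dnf task installs device-mapper-multipath (the man-db-cache-update run above is a package trigger fired by the RPM transaction, not part of the play). Reconstructed from the logged parameters, with only the task name assumed:

    - name: Install device-mapper-multipath
      ansible.builtin.dnf:
        name: device-mapper-multipath
        state: present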
Feb 19 20:02:37 compute-0 sudo[166885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjflvgqrfgileiemxglfztywzkbjihiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531357.525009-162-152703906838901/AnsiballZ_file.py'
Feb 19 20:02:37 compute-0 sudo[166885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:37 compute-0 python3.9[166888]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 19 20:02:37 compute-0 sudo[166885]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:38 compute-0 sudo[167038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmqyhxgctpiynqmhaasbwpehqjmnnada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531358.0970647-170-212203696732154/AnsiballZ_modprobe.py'
Feb 19 20:02:38 compute-0 sudo[167038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:38 compute-0 python3.9[167041]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 19 20:02:38 compute-0 sudo[167038]: pam_unix(sudo:session): session closed for user root
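The modprobe task loads dm-multipath immediately; persistent=disabled leaves boot-time loading to the separate modules-load.d drop-in written next. Sketch from the logged arguments:

    - name: Load the dm-multipath kernel module
      community.general.modprobe:
        name: dm-multipath
        state: present
        persistent: disabled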
Feb 19 20:02:39 compute-0 sudo[167195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfvibhwntlpfgnhhttjhsxuybzsljnht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531358.8697667-178-268221860287633/AnsiballZ_stat.py'
Feb 19 20:02:39 compute-0 sudo[167195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:39 compute-0 python3.9[167198]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:02:39 compute-0 sudo[167195]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:39 compute-0 sudo[167319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxstmwfyykvoydogfisllcmhniqvcpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531358.8697667-178-268221860287633/AnsiballZ_copy.py'
Feb 19 20:02:39 compute-0 sudo[167319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:39 compute-0 python3.9[167322]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531358.8697667-178-268221860287633/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:39 compute-0 sudo[167319]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:40 compute-0 sudo[167472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujzhcfqtguubgrlykvhbsdkistshmjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531360.0135078-194-158467672584890/AnsiballZ_lineinfile.py'
Feb 19 20:02:40 compute-0 sudo[167472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:40 compute-0 python3.9[167475]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:40 compute-0 sudo[167472]: pam_unix(sudo:session): session closed for user root
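Boot-time persistence is handled twice: a modules-load.d drop-in rendered from module-load.conf.j2 (the body was not logged, only its checksum; modules-load.d(5) expects one module name per line) and a matching line in /etc/modules. A sketch of both tasks under those assumptions:

    - name: Persist dm-multipath load at boot
      ansible.builtin.copy:
        dest: /etc/modules-load.d/dm-multipath.conf
        mode: '0644'
        content: "dm-multipath\n"  # assumed body; the log records only the checksum

    - name: Also list the module in /etc/modules
      ansible.builtin.lineinfile:
        path: /etc/modules
        create: true
        mode: '0644'
        line: dm-multipath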
Feb 19 20:02:41 compute-0 sudo[167625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxreyrvqmqmdsuqrkhnlwjiwgnlrlwbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531360.6324854-202-36541686959974/AnsiballZ_systemd.py'
Feb 19 20:02:41 compute-0 sudo[167625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:41 compute-0 python3.9[167628]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:02:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 19 20:02:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 19 20:02:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 19 20:02:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 19 20:02:41 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 19 20:02:41 compute-0 sudo[167625]: pam_unix(sudo:session): session closed for user root
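Restarting systemd-modules-load.service makes systemd re-read the drop-ins and load anything new, which is what the Stopped/Starting/Finished triplet above reflects:

    - name: Re-run kernel module loading
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted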
Feb 19 20:02:41 compute-0 sudo[167782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgeeuiwopkyttaelumntlamjgxzrefzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531361.736982-210-247846979318676/AnsiballZ_command.py'
Feb 19 20:02:41 compute-0 sudo[167782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:42 compute-0 python3.9[167785]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:02:42 compute-0 sudo[167782]: pam_unix(sudo:session): session closed for user root
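The restorecon run is a dry check: -n reports mislabeled files without changing them, -v prints each one, -r recurses. Since it is read-only, marking the task unchanged is a reasonable (assumed) refinement on top of the logged command:

    - name: Check SELinux labels under /etc/multipath (report only)
      ansible.builtin.command: /usr/sbin/restorecon -nvr /etc/multipath
      register: multipath_relabel  # assumed name
      changed_when: false          # assumed; -n never modifies anything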
Feb 19 20:02:42 compute-0 sudo[167936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hybgcqryiqxnqvgzrqnawadakcccahwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531362.4380217-220-64448555884697/AnsiballZ_stat.py'
Feb 19 20:02:42 compute-0 sudo[167936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:42 compute-0 python3.9[167939]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:02:42 compute-0 sudo[167936]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:43 compute-0 sudo[168091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqmthxeuinxgvvqniuwqlkiulbwnmza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531363.0893703-229-175136293444170/AnsiballZ_stat.py'
Feb 19 20:02:43 compute-0 sudo[168091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:43 compute-0 python3.9[168094]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:02:43 compute-0 sudo[168091]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:43 compute-0 sudo[168215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdkzdoplnidpuvwnaqvgkzylscddomi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531363.0893703-229-175136293444170/AnsiballZ_copy.py'
Feb 19 20:02:43 compute-0 sudo[168215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:44 compute-0 python3.9[168218]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531363.0893703-229-175136293444170/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:44 compute-0 sudo[168215]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:44 compute-0 sshd-session[168024]: Invalid user x from 125.31.2.160 port 40420
Feb 19 20:02:44 compute-0 sudo[168368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mplfahxzettmbcznolfbwrfudxaiwxij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531364.1873512-244-70360064849125/AnsiballZ_command.py'
Feb 19 20:02:44 compute-0 sudo[168368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:44 compute-0 sshd-session[168024]: Received disconnect from 125.31.2.160 port 40420:11: Bye Bye [preauth]
Feb 19 20:02:44 compute-0 sshd-session[168024]: Disconnected from invalid user x 125.31.2.160 port 40420 [preauth]
Feb 19 20:02:44 compute-0 python3.9[168371]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:02:45 compute-0 podman[168373]: 2026-02-19 20:02:45.408065291 +0000 UTC m=+0.094740045 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:02:45 compute-0 sudo[168368]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:46 compute-0 sudo[168549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhbbgzyactneudrksbsmjmoqrjamhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531365.7667785-252-207286346079173/AnsiballZ_lineinfile.py'
Feb 19 20:02:46 compute-0 sudo[168549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:46 compute-0 python3.9[168552]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:46 compute-0 sudo[168549]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:46 compute-0 sudo[168702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prlsdvcqginfpixaywqhhnhyiaaxjxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531366.390904-260-177815587306380/AnsiballZ_replace.py'
Feb 19 20:02:46 compute-0 sudo[168702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:47 compute-0 python3.9[168705]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:47 compute-0 sudo[168702]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:47 compute-0 sudo[168855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkahmijupefzoqvnvfqqgselehpitoxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531367.2611036-268-8487509998064/AnsiballZ_replace.py'
Feb 19 20:02:47 compute-0 sudo[168855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:47 compute-0 python3.9[168858]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:47 compute-0 sudo[168855]: pam_unix(sudo:session): session closed for user root
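Taken together, the grep, the lineinfile and the two replace calls guarantee that /etc/multipath.conf has a blacklist section and that it does not carry a catch-all devnode ".*" rule, which would blacklist every device. A sketch of the edits, with the regexes verbatim from the log and the conditional wiring to the grep result assumed:

    - name: Ensure a blacklist section exists
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        line: 'blacklist {'
      when: blacklist_check.rc != 0  # assumed gate on the logged grep

    - name: Close the newly opened section
      ansible.builtin.replace:
        path: /etc/multipath.conf
        regexp: '^(blacklist {)'
        replace: '\1\n}'

    - name: Drop a catch-all devnode blacklist entry
      ansible.builtin.replace:
        path: /etc/multipath.conf
        regexp: '^blacklist\s*{\n[\s]+devnode "\.\*"'
        replace: 'blacklist {'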
Feb 19 20:02:48 compute-0 sudo[169008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwyrkfiwiuubtvtoyvjpmijlymviswxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531367.8163512-277-173602032343856/AnsiballZ_lineinfile.py'
Feb 19 20:02:48 compute-0 sudo[169008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:48 compute-0 python3.9[169011]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:48 compute-0 sudo[169008]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:48 compute-0 sudo[169161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdepwotyzosnyzqvygxhantorpcinht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531368.3593626-277-21000216453189/AnsiballZ_lineinfile.py'
Feb 19 20:02:48 compute-0 sudo[169161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:48 compute-0 python3.9[169164]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:48 compute-0 sudo[169161]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:49 compute-0 sudo[169314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgdhedhetyhqfbnfcblbwuyydaejnik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531368.8992326-277-79931789843114/AnsiballZ_lineinfile.py'
Feb 19 20:02:49 compute-0 sudo[169314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:49 compute-0 python3.9[169317]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:49 compute-0 sudo[169314]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:49 compute-0 sudo[169467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjmqxxeidvcpizkqtcnqofqthcozoqgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531369.4481633-277-123052431495707/AnsiballZ_lineinfile.py'
Feb 19 20:02:49 compute-0 sudo[169467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:49 compute-0 python3.9[169470]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:49 compute-0 sudo[169467]: pam_unix(sudo:session): session closed for user root
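Four near-identical lineinfile calls then pin the defaults section: find_multipaths yes, recheck_wwid yes, skip_kpartx yes, and user_friendly_names no (stable WWID-based device names rather than mpathN aliases). The loop below is an assumed consolidation of those four logged tasks; the keys, values and regexes all appear in the log:

    - name: Set multipath defaults
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        firstmatch: true
        insertafter: '^defaults'
        regexp: '^\s+{{ item.key }}'
        line: "        {{ item.key }} {{ item.value }}"
      loop:
        - { key: find_multipaths, value: 'yes' }
        - { key: recheck_wwid, value: 'yes' }
        - { key: skip_kpartx, value: 'yes' }
        - { key: user_friendly_names, value: 'no' }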
Feb 19 20:02:50 compute-0 sudo[169620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iivqkaqtaxwyqshvgycjpucejyxtfnkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531369.9924695-306-190325274807392/AnsiballZ_stat.py'
Feb 19 20:02:50 compute-0 sudo[169620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:50 compute-0 python3.9[169623]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:02:50 compute-0 sudo[169620]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:50 compute-0 sudo[169775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sehnyahcihqcrhbmvctcibuyfzglnvuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531370.5516932-314-19437806686712/AnsiballZ_command.py'
Feb 19 20:02:50 compute-0 sudo[169775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:50 compute-0 python3.9[169778]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:02:51 compute-0 sudo[169775]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:51 compute-0 sudo[169929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgfqnwqdxuaritarzmxqphrnggcwrrru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531371.2105927-323-109440060875820/AnsiballZ_systemd_service.py'
Feb 19 20:02:51 compute-0 sudo[169929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:51 compute-0 python3.9[169932]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:02:51 compute-0 systemd[1]: Listening on multipathd control socket.
Feb 19 20:02:51 compute-0 sudo[169929]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:52 compute-0 sudo[170086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtytjqdzmqqzjrojosjandpbndextwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531371.9838724-331-220446159634898/AnsiballZ_systemd_service.py'
Feb 19 20:02:52 compute-0 sudo[170086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:52 compute-0 python3.9[170089]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:02:52 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 19 20:02:52 compute-0 udevadm[170094]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 19 20:02:52 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 19 20:02:52 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 19 20:02:52 compute-0 multipathd[170098]: --------start up--------
Feb 19 20:02:52 compute-0 multipathd[170098]: read /etc/multipath.conf
Feb 19 20:02:52 compute-0 multipathd[170098]: path checkers start up
Feb 19 20:02:52 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 19 20:02:52 compute-0 sudo[170086]: pam_unix(sudo:session): session closed for user root
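multipathd follows the same socket-then-service pattern as iscsid above; the udev-settle warning is the unit's own dependency complaint, not a failure of the play. An assumed consolidation of the two logged tasks:

    - name: Enable and start multipathd socket and service
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        enabled: true
        state: started
      loop:
        - multipathd.socket
        - multipathd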
Feb 19 20:02:53 compute-0 sudo[170256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzivvqkrdqjyvwrusewfcmnyxscsphwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531373.0426004-343-77536408771809/AnsiballZ_file.py'
Feb 19 20:02:53 compute-0 sudo[170256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:53 compute-0 python3.9[170259]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 19 20:02:53 compute-0 sudo[170256]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:53 compute-0 sudo[170409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxerdrppikchkcdrdspbrgsjasxqbvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531373.6290555-351-61785733495273/AnsiballZ_modprobe.py'
Feb 19 20:02:53 compute-0 sudo[170409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:54 compute-0 python3.9[170412]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 19 20:02:54 compute-0 kernel: Key type psk registered
Feb 19 20:02:54 compute-0 sudo[170409]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:54 compute-0 sudo[170571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkpuanmkvioqyxcwjghzbpunzvjrgrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531374.2597787-359-61643529615410/AnsiballZ_stat.py'
Feb 19 20:02:54 compute-0 sudo[170571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:54 compute-0 python3.9[170574]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:02:54 compute-0 sudo[170571]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:54 compute-0 sudo[170695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rohqryishhhqofjgblzcimfsjljgvwyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531374.2597787-359-61643529615410/AnsiballZ_copy.py'
Feb 19 20:02:54 compute-0 sudo[170695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:55 compute-0 python3.9[170698]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531374.2597787-359-61643529615410/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:55 compute-0 sudo[170695]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:55 compute-0 sudo[170848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbkcwxpjuwyrawfehaorujqksmflygmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531375.3514395-375-194397728122723/AnsiballZ_lineinfile.py'
Feb 19 20:02:55 compute-0 sudo[170848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:55 compute-0 python3.9[170851]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:02:55 compute-0 sudo[170848]: pam_unix(sudo:session): session closed for user root
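The NVMe-over-fabrics side repeats the dm-multipath recipe: a modprobe of nvme-fabrics (the "Key type psk registered" kernel line above appears to be a side effect of that load), then the same two persistence tasks. Sketch under the same modules-load.d(5) assumption about the unlogged template body:

    - name: Persist nvme-fabrics load at boot
      ansible.builtin.copy:
        dest: /etc/modules-load.d/nvme-fabrics.conf
        mode: '0644'
        content: "nvme-fabrics\n"  # assumed body; only the checksum is logged

    - name: Also list it in /etc/modules
      ansible.builtin.lineinfile:
        path: /etc/modules
        create: true
        mode: '0644'
        line: nvme-fabrics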
Feb 19 20:02:56 compute-0 sudo[171001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfmufzeuefnhuliuafjvsawtcurgflxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531375.941704-383-154815782430944/AnsiballZ_systemd.py'
Feb 19 20:02:56 compute-0 sudo[171001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:56 compute-0 python3.9[171004]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:02:56 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 19 20:02:56 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 19 20:02:56 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 19 20:02:56 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 19 20:02:56 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 19 20:02:56 compute-0 sudo[171001]: pam_unix(sudo:session): session closed for user root
Feb 19 20:02:57 compute-0 sudo[171158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgvwjijoeisgyelruigelxpmmhzguwto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531377.0074844-391-234277537782739/AnsiballZ_dnf.py'
Feb 19 20:02:57 compute-0 sudo[171158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:02:57 compute-0 python3.9[171161]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 20:02:59 compute-0 systemd[1]: Reloading.
Feb 19 20:02:59 compute-0 systemd-rc-local-generator[171187]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:02:59 compute-0 systemd-sysv-generator[171192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:02:59 compute-0 systemd[1]: Reloading.
Feb 19 20:02:59 compute-0 systemd-sysv-generator[171236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:02:59 compute-0 systemd-rc-local-generator[171232]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:03:00 compute-0 systemd-logind[810]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 19 20:03:00 compute-0 systemd-logind[810]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 19 20:03:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 19 20:03:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 19 20:03:00 compute-0 systemd[1]: Reloading.
Feb 19 20:03:00 compute-0 systemd-rc-local-generator[171338]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:03:00 compute-0 systemd-sysv-generator[171341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:03:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 19 20:03:00 compute-0 sudo[171158]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 19 20:03:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 19 20:03:01 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.103s CPU time.
Feb 19 20:03:01 compute-0 systemd[1]: run-rb20ce608388546168783b3ba6e6f6db3.service: Deactivated successfully.
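nvme-cli supplies the userspace tooling (nvme discover, nvme connect, and so on) for the fabrics module loaded above. Reconstructed from the logged parameters:

    - name: Install nvme-cli
      ansible.builtin.dnf:
        name: nvme-cli
        state: present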
Feb 19 20:03:01 compute-0 sudo[172653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlcuunbvlmpmlojsbujmwiychlfdutnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531381.053324-399-239152027269489/AnsiballZ_systemd_service.py'
Feb 19 20:03:01 compute-0 sudo[172653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:01 compute-0 python3.9[172656]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:03:01 compute-0 systemd[1]: Stopping Open-iSCSI...
Feb 19 20:03:01 compute-0 iscsid[166084]: iscsid shutting down.
Feb 19 20:03:01 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Feb 19 20:03:01 compute-0 systemd[1]: Stopped Open-iSCSI.
Feb 19 20:03:01 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 19 20:03:01 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 19 20:03:01 compute-0 systemd[1]: Started Open-iSCSI.
Feb 19 20:03:01 compute-0 sudo[172653]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:01 compute-0 sudo[172810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktyetccrbgjntrhcxtilgtvcxffkowly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531381.7434947-407-79708724558988/AnsiballZ_systemd_service.py'
Feb 19 20:03:01 compute-0 sudo[172810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:02 compute-0 python3.9[172813]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:03:02 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 19 20:03:02 compute-0 multipathd[170098]: exit (signal)
Feb 19 20:03:02 compute-0 multipathd[170098]: --------shut down-------
Feb 19 20:03:02 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Feb 19 20:03:02 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 19 20:03:02 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 19 20:03:02 compute-0 podman[172815]: 2026-02-19 20:03:02.295497722 +0000 UTC m=+0.062109969 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 19 20:03:02 compute-0 multipathd[172838]: --------start up--------
Feb 19 20:03:02 compute-0 multipathd[172838]: read /etc/multipath.conf
Feb 19 20:03:02 compute-0 multipathd[172838]: path checkers start up
Feb 19 20:03:02 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 19 20:03:02 compute-0 sudo[172810]: pam_unix(sudo:session): session closed for user root
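With configuration in place, both daemons are bounced so they re-read their files; the multipathd start-up banner confirms it re-read /etc/multipath.conf. An assumed consolidation of the two logged restarts:

    - name: Restart iscsid and multipathd to pick up new config
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        state: restarted
      loop:
        - iscsid
        - multipathd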
Feb 19 20:03:03 compute-0 python3.9[172996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:03:03 compute-0 sudo[173150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jboviziizkskhzrmkzcerdofogxhpkar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531383.662848-425-53108193597941/AnsiballZ_file.py'
Feb 19 20:03:03 compute-0 sudo[173150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:04 compute-0 python3.9[173153]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:04 compute-0 sudo[173150]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:04 compute-0 sudo[173303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loeefsgcpwtzptoxlxbrxqiapibrgjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531384.4102817-436-250913989293041/AnsiballZ_systemd_service.py'
Feb 19 20:03:04 compute-0 sudo[173303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:04 compute-0 python3.9[173306]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:03:04 compute-0 systemd[1]: Reloading.
Feb 19 20:03:04 compute-0 systemd-rc-local-generator[173329]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:03:04 compute-0 systemd-sysv-generator[173332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:03:05 compute-0 sudo[173303]: pam_unix(sudo:session): session closed for user root
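A bare daemon_reload makes systemd re-parse unit files before service facts are gathered again:

    - name: Reload systemd unit definitions
      ansible.builtin.systemd_service:
        daemon_reload: true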
Feb 19 20:03:05 compute-0 python3.9[173498]: ansible-ansible.builtin.service_facts Invoked
Feb 19 20:03:05 compute-0 network[173515]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 19 20:03:05 compute-0 network[173516]: 'network-scripts' will be removed from the distribution in the near future.
Feb 19 20:03:05 compute-0 network[173517]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 20:03:09 compute-0 sudo[173788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgscsbniazvvyuacvykmlrqdgvfgcgab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531389.0725856-455-223927121016302/AnsiballZ_systemd_service.py'
Feb 19 20:03:09 compute-0 sudo[173788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:09 compute-0 python3.9[173791]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:09 compute-0 sudo[173788]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:10 compute-0 sudo[173942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jraclfsivgupiitlzshkspkiweqzhami ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531389.823689-455-212105944882048/AnsiballZ_systemd_service.py'
Feb 19 20:03:10 compute-0 sudo[173942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:10 compute-0 python3.9[173945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:10 compute-0 sudo[173942]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:10 compute-0 sudo[174096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwdsbjhskwtagbaqtxwflmeyfczmvzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531390.5377095-455-115241611570719/AnsiballZ_systemd_service.py'
Feb 19 20:03:10 compute-0 sudo[174096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:11 compute-0 python3.9[174099]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:11 compute-0 sudo[174096]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:11 compute-0 sudo[174250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzjjcvmxvvsozathnxwldmmamcberbdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531391.174866-455-209137783091516/AnsiballZ_systemd_service.py'
Feb 19 20:03:11 compute-0 sudo[174250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:11 compute-0 python3.9[174253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:11 compute-0 sudo[174250]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:12 compute-0 sudo[174404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xldwsaxrzvechiqrnxrsottcemfsqobj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531391.8081143-455-211920227378391/AnsiballZ_systemd_service.py'
Feb 19 20:03:12 compute-0 sudo[174404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:12 compute-0 python3.9[174407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:12 compute-0 sudo[174404]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:12 compute-0 sudo[174558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjakjqzibnbymyhwhkuqiijhpnqxrkyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531392.4678752-455-115581484525209/AnsiballZ_systemd_service.py'
Feb 19 20:03:12 compute-0 sudo[174558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:12 compute-0 python3.9[174561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:13 compute-0 sudo[174558]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:13 compute-0 sudo[174712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwzkxovqgniygurmkqenfbiluozxbmpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531393.1282988-455-167047357907873/AnsiballZ_systemd_service.py'
Feb 19 20:03:13 compute-0 sudo[174712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:13 compute-0 python3.9[174715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:13 compute-0 sudo[174712]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:13 compute-0 sudo[174866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuidwtxeuaimoyohzmrsmdwbtbudhcpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531393.7578359-455-268386419027141/AnsiballZ_systemd_service.py'
Feb 19 20:03:13 compute-0 sudo[174866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:14 compute-0 python3.9[174869]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:03:14 compute-0 sudo[174866]: pam_unix(sudo:session): session closed for user root
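The adoption then retires the systemd units left behind by the previous TripleO deployment: eight separate systemd_service tasks stop and disable them. An assumed loop over the unit names seen in the log:

    - name: Stop and disable the old TripleO nova services
      ansible.builtin.systemd_service:
        name: "tripleo_nova_{{ item }}.service"
        enabled: false
        state: stopped
      loop:
        - compute
        - migration_target
        - api_cron
        - api
        - conductor
        - metadata
        - scheduler
        - vnc_proxy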
Feb 19 20:03:14 compute-0 sudo[175020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyofhyubsvuvkpgmcvdlsvvsxfxasmjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531394.5826523-514-235717273181423/AnsiballZ_file.py'
Feb 19 20:03:14 compute-0 sudo[175020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:15 compute-0 python3.9[175023]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:15 compute-0 sudo[175020]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:15 compute-0 sudo[175173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gowxgkopkpivtuntakadfvgduihocnfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531395.1298604-514-87128567570210/AnsiballZ_file.py'
Feb 19 20:03:15 compute-0 sudo[175173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:15 compute-0 python3.9[175176]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:15 compute-0 sudo[175173]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:15 compute-0 podman[175177]: 2026-02-19 20:03:15.625091113 +0000 UTC m=+0.059741897 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:03:15 compute-0 sudo[175352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgrlggvmfmrxyusfselcmbbmaiykrvap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531395.6493435-514-27467402098030/AnsiballZ_file.py'
Feb 19 20:03:15 compute-0 sudo[175352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:16 compute-0 python3.9[175355]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:16 compute-0 sudo[175352]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:16 compute-0 sudo[175505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddisslbvxznlzzqqhzckneqnqyfvmpdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531396.1802936-514-31981209727873/AnsiballZ_file.py'
Feb 19 20:03:16 compute-0 sudo[175505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:16 compute-0 python3.9[175508]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:16 compute-0 sudo[175505]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:16 compute-0 sudo[175658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stgopkxqpwxnqnjuiwgojewtehajmydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531396.6807384-514-140416132184156/AnsiballZ_file.py'
Feb 19 20:03:16 compute-0 sudo[175658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:17 compute-0 python3.9[175661]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:17 compute-0 sudo[175658]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:17 compute-0 sudo[175811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzmerknwwosmqcrukasptaxjtrmvbzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531397.2903638-514-17252684858257/AnsiballZ_file.py'
Feb 19 20:03:17 compute-0 sudo[175811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:17 compute-0 python3.9[175814]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:17 compute-0 sudo[175811]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:18 compute-0 sudo[175964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rblnaoogymlghlgwzqklcqzxhrchsynz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531397.83098-514-40961324308785/AnsiballZ_file.py'
Feb 19 20:03:18 compute-0 sudo[175964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:18 compute-0 python3.9[175967]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:18 compute-0 sudo[175964]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:18 compute-0 sudo[176117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lklwtteqpyaknnigzvfokgnvectkiokq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531398.3937337-514-95149107630755/AnsiballZ_file.py'
Feb 19 20:03:18 compute-0 sudo[176117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:18 compute-0 python3.9[176120]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:18 compute-0 sudo[176117]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:19 compute-0 sudo[176270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjafvmgtqneauwmjhsgamhosqnnobdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531398.9497564-571-110608635822184/AnsiballZ_file.py'
Feb 19 20:03:19 compute-0 sudo[176270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:19 compute-0 python3.9[176273]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:19 compute-0 sudo[176270]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:19 compute-0 sudo[176423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qprlxzbwcuxdpupobymuhybhrwtvlftj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531399.4896023-571-132875341647961/AnsiballZ_file.py'
Feb 19 20:03:19 compute-0 sudo[176423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:19 compute-0 python3.9[176426]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:19 compute-0 sudo[176423]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:20 compute-0 sudo[176576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpxndsefcoybqwdtapgmwqdcxdfcsqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531399.9657729-571-248595092923807/AnsiballZ_file.py'
Feb 19 20:03:20 compute-0 sudo[176576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:20 compute-0 python3.9[176579]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:20 compute-0 sudo[176576]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:20 compute-0 sudo[176731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrefhjgpyewilwxozwijehkzafnyaft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531400.527726-571-243352294787212/AnsiballZ_file.py'
Feb 19 20:03:20 compute-0 sudo[176731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:20 compute-0 python3.9[176734]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:20 compute-0 sudo[176731]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:21 compute-0 sudo[176884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfcoxpdqyeuwvahqvnrwaaktodguxkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531401.0735517-571-27313984673758/AnsiballZ_file.py'
Feb 19 20:03:21 compute-0 sudo[176884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:21 compute-0 python3.9[176887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:21 compute-0 sudo[176884]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:21 compute-0 sshd-session[176600]: Invalid user httpd from 103.154.77.48 port 39804
Feb 19 20:03:22 compute-0 sshd-session[176600]: Received disconnect from 103.154.77.48 port 39804:11: Bye Bye [preauth]
Feb 19 20:03:22 compute-0 sshd-session[176600]: Disconnected from invalid user httpd 103.154.77.48 port 39804 [preauth]
Feb 19 20:03:22 compute-0 sudo[177037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmvdgjuejpipfrphwxhtrqpsrppcsvfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531401.8484044-571-138202810294853/AnsiballZ_file.py'
Feb 19 20:03:22 compute-0 sudo[177037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:22 compute-0 python3.9[177040]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:22 compute-0 sudo[177037]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:22 compute-0 sudo[177190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqeuyaywtbstizfhbphnnvwugrtdtxqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531402.427474-571-50280369124001/AnsiballZ_file.py'
Feb 19 20:03:22 compute-0 sudo[177190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:22 compute-0 python3.9[177193]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:22 compute-0 sudo[177190]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:23 compute-0 sudo[177343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzooyshhjvesgvqprrmomyoxbxtyqje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531402.966281-571-38741725561941/AnsiballZ_file.py'
Feb 19 20:03:23 compute-0 sudo[177343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:23 compute-0 python3.9[177346]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:23 compute-0 sudo[177343]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:23 compute-0 sudo[177496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvwxfdxwvjgqghdkiborcghfqghbesxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531403.5866354-629-91440137082203/AnsiballZ_command.py'
Feb 19 20:03:23 compute-0 sudo[177496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:23 compute-0 python3.9[177499]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:23 compute-0 sudo[177496]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:24 compute-0 python3.9[177651]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 20:03:25 compute-0 sudo[177801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luljbnthdxhemprspxwqbvpuvrxyasel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531404.9413733-647-189602241657631/AnsiballZ_systemd_service.py'
Feb 19 20:03:25 compute-0 sudo[177801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:25 compute-0 python3.9[177804]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:03:25 compute-0 systemd[1]: Reloading.
Feb 19 20:03:25 compute-0 systemd-rc-local-generator[177827]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:03:25 compute-0 systemd-sysv-generator[177830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:03:25 compute-0 sudo[177801]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:26 compute-0 sudo[177996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twkmlmofcfjeqkqbdowozwrprvumdiuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531405.847031-655-127692734309673/AnsiballZ_command.py'
Feb 19 20:03:26 compute-0 sudo[177996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:26 compute-0 python3.9[177999]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:26 compute-0 sudo[177996]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:26 compute-0 sudo[178150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjehwnxvtcboljyhbopclcjozeueusqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531406.421185-655-124168733028170/AnsiballZ_command.py'
Feb 19 20:03:26 compute-0 sudo[178150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:26 compute-0 python3.9[178153]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:26 compute-0 sudo[178150]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:27 compute-0 sudo[178304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxsczzyfbudinuvbpbkioklkferjhadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531406.9722192-655-145076972400960/AnsiballZ_command.py'
Feb 19 20:03:27 compute-0 sudo[178304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:27 compute-0 python3.9[178307]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:27 compute-0 sudo[178304]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:27 compute-0 sudo[178458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzuzbzqfrcmgcrowyzgkqegvidwflrxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531407.4736574-655-63670947574880/AnsiballZ_command.py'
Feb 19 20:03:27 compute-0 sudo[178458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:27 compute-0 python3.9[178461]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:27 compute-0 sudo[178458]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:28 compute-0 sudo[178612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fibotakonfhwmukhpetawdhqlwawoxmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531407.9850414-655-145039925271149/AnsiballZ_command.py'
Feb 19 20:03:28 compute-0 sudo[178612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:28 compute-0 python3.9[178615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:28 compute-0 sudo[178612]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:28 compute-0 sudo[178766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnpupfknlhannneaenjgibgnesostyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531408.5578864-655-191144820919129/AnsiballZ_command.py'
Feb 19 20:03:28 compute-0 sudo[178766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:28 compute-0 python3.9[178769]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:28 compute-0 sudo[178766]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:29 compute-0 sudo[178920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqnpjyjbzaltleohrzrhllqvccdrytnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531409.0933623-655-191977779466148/AnsiballZ_command.py'
Feb 19 20:03:29 compute-0 sudo[178920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:29 compute-0 python3.9[178923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:29 compute-0 sudo[178920]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:29 compute-0 sudo[179074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcqpjoyeiixlxpgudzlkjeubdwevszc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531409.6242933-655-37738486178188/AnsiballZ_command.py'
Feb 19 20:03:29 compute-0 sudo[179074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:30 compute-0 python3.9[179077]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:03:30 compute-0 sudo[179074]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:03:30.407 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:03:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:03:30.408 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:03:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:03:30.408 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:03:31 compute-0 sudo[179228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmzcygpjhrxkpqiiwqftliwamkmotzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531410.9893014-734-48203436877108/AnsiballZ_file.py'
Feb 19 20:03:31 compute-0 sudo[179228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:31 compute-0 python3.9[179231]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:31 compute-0 sudo[179228]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:31 compute-0 sudo[179381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fajljmpaxqlucujnelwqbopbkpvtqnos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531411.583706-734-179437130957003/AnsiballZ_file.py'
Feb 19 20:03:31 compute-0 sudo[179381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:31 compute-0 python3.9[179384]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:31 compute-0 sudo[179381]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:32 compute-0 sudo[179545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njuzkdcunklwvqixiunutoaprjirpffx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531412.1354167-749-209998156521304/AnsiballZ_file.py'
Feb 19 20:03:32 compute-0 sudo[179545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:32 compute-0 podman[179508]: 2026-02-19 20:03:32.424031556 +0000 UTC m=+0.076040545 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:03:32 compute-0 python3.9[179549]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:32 compute-0 sudo[179545]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:32 compute-0 sudo[179705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iovqrwnliqamwbydusnimesoiwdpzszs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531412.70712-749-238663084352756/AnsiballZ_file.py'
Feb 19 20:03:32 compute-0 sudo[179705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:33 compute-0 python3.9[179708]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:33 compute-0 sudo[179705]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:33 compute-0 sudo[179858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpdocrhkljxpsnwjakofknlevlujluar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531413.3054976-749-97425000481562/AnsiballZ_file.py'
Feb 19 20:03:33 compute-0 sudo[179858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:33 compute-0 python3.9[179861]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:33 compute-0 sudo[179858]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:34 compute-0 sudo[180011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vksdcnavhsndridjmmmmeydqrdgfdedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531413.9712691-749-888049952744/AnsiballZ_file.py'
Feb 19 20:03:34 compute-0 sudo[180011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:34 compute-0 python3.9[180014]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:34 compute-0 sudo[180011]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:34 compute-0 sudo[180164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvnsakuchredgjfydbdotpdruymotaxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531414.6160264-749-22561249498021/AnsiballZ_file.py'
Feb 19 20:03:34 compute-0 sudo[180164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:35 compute-0 python3.9[180167]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:35 compute-0 sudo[180164]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:35 compute-0 sudo[180317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avwhobpdrtyxkkbqrsfzlpupqodtjzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531415.1530998-749-118601795100193/AnsiballZ_file.py'
Feb 19 20:03:35 compute-0 sudo[180317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:35 compute-0 python3.9[180320]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:35 compute-0 sudo[180317]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:35 compute-0 sudo[180470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnbujjepsljzbqtmejwnqrfrkadhbbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531415.7618933-749-146756261549540/AnsiballZ_file.py'
Feb 19 20:03:35 compute-0 sudo[180470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:36 compute-0 python3.9[180473]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:36 compute-0 sudo[180470]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:37 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 19 20:03:38 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:03:39 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 19 20:03:40 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 19 20:03:42 compute-0 sudo[180627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tebfxuqjtwclgnbgagkutpgiqcpokcqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531421.764141-958-259163660783323/AnsiballZ_getent.py'
Feb 19 20:03:42 compute-0 sudo[180627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:42 compute-0 python3.9[180630]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 19 20:03:42 compute-0 sudo[180627]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:43 compute-0 sudo[180781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzaafmiyaojdcjjwpxmgajenmnjtcxjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531422.589724-966-172975992441066/AnsiballZ_group.py'
Feb 19 20:03:43 compute-0 sudo[180781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:43 compute-0 python3.9[180784]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 19 20:03:43 compute-0 groupadd[180785]: group added to /etc/group: name=nova, GID=42436
Feb 19 20:03:43 compute-0 groupadd[180785]: group added to /etc/gshadow: name=nova
Feb 19 20:03:43 compute-0 groupadd[180785]: new group: name=nova, GID=42436
Feb 19 20:03:43 compute-0 sudo[180781]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:44 compute-0 sudo[180940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwckcqegpuplyicofpbiplrvomazkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531423.6941068-974-170085875156163/AnsiballZ_user.py'
Feb 19 20:03:44 compute-0 sudo[180940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:44 compute-0 python3.9[180943]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 19 20:03:44 compute-0 useradd[180945]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/1
Feb 19 20:03:44 compute-0 useradd[180945]: add 'nova' to group 'libvirt'
Feb 19 20:03:44 compute-0 useradd[180945]: add 'nova' to shadow group 'libvirt'
Feb 19 20:03:44 compute-0 sudo[180940]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:45 compute-0 sshd-session[180976]: Accepted publickey for zuul from 192.168.122.30 port 60310 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 20:03:45 compute-0 systemd-logind[810]: New session 24 of user zuul.
Feb 19 20:03:45 compute-0 systemd[1]: Started Session 24 of User zuul.
Feb 19 20:03:45 compute-0 sshd-session[180976]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:03:45 compute-0 podman[180979]: 2026-02-19 20:03:45.759102596 +0000 UTC m=+0.088045100 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller)
Feb 19 20:03:45 compute-0 sshd-session[180985]: Received disconnect from 192.168.122.30 port 60310:11: disconnected by user
Feb 19 20:03:45 compute-0 sshd-session[180985]: Disconnected from user zuul 192.168.122.30 port 60310
Feb 19 20:03:45 compute-0 sshd-session[180976]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:03:45 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Feb 19 20:03:45 compute-0 systemd-logind[810]: Session 24 logged out. Waiting for processes to exit.
Feb 19 20:03:45 compute-0 systemd-logind[810]: Removed session 24.
Feb 19 20:03:46 compute-0 python3.9[181156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:03:46 compute-0 python3.9[181232]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:47 compute-0 python3.9[181382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:03:47 compute-0 python3.9[181503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531426.9974833-999-91687162886680/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:48 compute-0 python3.9[181653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:03:49 compute-0 python3.9[181774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531428.1349883-999-77135961505787/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:49 compute-0 python3.9[181924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:03:50 compute-0 python3.9[182045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531429.2276762-999-167621493727810/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:51 compute-0 python3.9[182195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:03:51 compute-0 python3.9[182316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531430.6550703-1053-194177712664449/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:52 compute-0 sudo[182466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glfbkvtxxnbqgreuheduliuzfsxbcqjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531432.1311488-1068-157919540760571/AnsiballZ_file.py'
Feb 19 20:03:52 compute-0 sudo[182466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:52 compute-0 python3.9[182469]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:52 compute-0 sudo[182466]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:53 compute-0 sudo[182619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hriyybutsnelrlqrheacdrhflubviznr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531432.7855873-1076-106521622804476/AnsiballZ_copy.py'
Feb 19 20:03:53 compute-0 sudo[182619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:53 compute-0 python3.9[182622]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:53 compute-0 sudo[182619]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:53 compute-0 sudo[182772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuhakeuewamvebszsyagjjfbgvnfpxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531433.3827689-1084-245796296501850/AnsiballZ_stat.py'
Feb 19 20:03:53 compute-0 sudo[182772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:53 compute-0 python3.9[182775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:03:53 compute-0 sudo[182772]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:54 compute-0 sudo[182925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgvybkiwxymkiuiebcwbnketedqqgqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531434.065748-1092-280998768582804/AnsiballZ_stat.py'
Feb 19 20:03:54 compute-0 sudo[182925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:54 compute-0 python3.9[182928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:03:54 compute-0 sudo[182925]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:54 compute-0 sudo[183049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnntvczmbnwlzvjmttfjqrwvpidwvdbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531434.065748-1092-280998768582804/AnsiballZ_copy.py'
Feb 19 20:03:54 compute-0 sudo[183049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:55 compute-0 python3.9[183052]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1771531434.065748-1092-280998768582804/.source _original_basename=.9h1s33qd follow=False checksum=2af43098e812dba045136a6bfcd2982cfab54525 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 19 20:03:55 compute-0 sudo[183049]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:55 compute-0 python3.9[183204]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:03:56 compute-0 sudo[183356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgqtttvmvugxpktcabyrrostsenmdtpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531436.219819-1120-126153347303171/AnsiballZ_file.py'
Feb 19 20:03:56 compute-0 sudo[183356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:56 compute-0 python3.9[183359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:03:56 compute-0 sudo[183356]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:57 compute-0 sudo[183509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folqtojyuerlzvalbvrxmywyaffxcyrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531436.8611712-1128-125890949372262/AnsiballZ_file.py'
Feb 19 20:03:57 compute-0 sudo[183509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:03:57 compute-0 python3.9[183512]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:03:57 compute-0 sudo[183509]: pam_unix(sudo:session): session closed for user root
Feb 19 20:03:57 compute-0 python3.9[183662]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:00 compute-0 sudo[184083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fneqvfubpoxlfunoqiwskbvgneqsjpxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531439.5267403-1162-177956232753913/AnsiballZ_container_config_data.py'
Feb 19 20:04:00 compute-0 sudo[184083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:00 compute-0 python3.9[184086]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 19 20:04:00 compute-0 sudo[184083]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:00 compute-0 sudo[184236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftsxehamgveolhryxmwpfenxtbsdyqke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531440.5704875-1173-222110655524610/AnsiballZ_container_config_hash.py'
Feb 19 20:04:00 compute-0 sudo[184236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:01 compute-0 python3.9[184239]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:04:01 compute-0 sudo[184236]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:01 compute-0 sudo[184389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzoajgkczlsjknhvluravlpqobhxbohl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531441.4705973-1183-204262970429232/AnsiballZ_edpm_container_manage.py'
Feb 19 20:04:01 compute-0 sudo[184389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:02 compute-0 python3[184392]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:04:02 compute-0 podman[184426]: 2026-02-19 20:04:02.362405443 +0000 UTC m=+0.040316651 container create 5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 19 20:04:02 compute-0 podman[184426]: 2026-02-19 20:04:02.340747556 +0000 UTC m=+0.018658774 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 19 20:04:02 compute-0 python3[184392]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 19 20:04:02 compute-0 sudo[184389]: pam_unix(sudo:session): session closed for user root
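[annotation] The nova_compute_init one-shot created above exists only to run nova_statedir_ownership.py inside the image and fix ownership under /var/lib/nova before the long-running service starts; NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id exempts the compute ID file. A minimal sketch of what such a walker does (illustrative only; the shipped /var/lib/openstack/nova/nova_statedir_ownership.py is not reproduced in this log):

    import os
    import pwd

    # Illustrative sketch, not the shipped nova_statedir_ownership.py.
    STATEDIR = '/var/lib/nova'
    SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '')

    def ensure_ownership():
        nova = pwd.getpwnam('nova')  # assumes a 'nova' user in the image
        for root, dirs, files in os.walk(STATEDIR):
            for name in dirs + files:
                path = os.path.join(root, name)
                if SKIP and path == SKIP:
                    continue  # leave the exempted path alone
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (nova.pw_uid, nova.pw_gid):
                    os.lchown(path, nova.pw_uid, nova.pw_gid)

    if __name__ == '__main__':
        ensure_ownership()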
Feb 19 20:04:02 compute-0 podman[184465]: 2026-02-19 20:04:02.54036476 +0000 UTC m=+0.047250967 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Feb 19 20:04:03 compute-0 sudo[184634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-darqmrupcmvcprpbvkqomcpbvymfzioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531442.602944-1191-167545861873936/AnsiballZ_stat.py'
Feb 19 20:04:03 compute-0 sudo[184634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:03 compute-0 python3.9[184637]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:03 compute-0 sudo[184634]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:04 compute-0 python3.9[184789]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:04:04 compute-0 sudo[184939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mouxbttqnhsixbonfnsnjplidphqevaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531444.499944-1218-207258773608774/AnsiballZ_stat.py'
Feb 19 20:04:04 compute-0 sudo[184939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:04 compute-0 python3.9[184942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:04 compute-0 sudo[184939]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:05 compute-0 sudo[185065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgngmfahfdrbmqjvezmxgqwiofnpugi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531444.499944-1218-207258773608774/AnsiballZ_copy.py'
Feb 19 20:04:05 compute-0 sudo[185065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:05 compute-0 python3.9[185068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531444.499944-1218-207258773608774/.source.yaml _original_basename=.83rxxcdr follow=False checksum=9e412c9674a5ce108c64c0c4fbf7ca3efa7f1657 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:05 compute-0 sudo[185065]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:05 compute-0 sudo[185218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svuvlzvhehgntzfzpydiagwlzghzozsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531445.7369514-1235-252616523167344/AnsiballZ_file.py'
Feb 19 20:04:05 compute-0 sudo[185218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:06 compute-0 python3.9[185221]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:06 compute-0 sudo[185218]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:06 compute-0 sudo[185371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khefonaqxivqmkdwvxecncitdxhmxvxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531446.286667-1243-1672582522646/AnsiballZ_file.py'
Feb 19 20:04:06 compute-0 sudo[185371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:06 compute-0 python3.9[185374]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:04:06 compute-0 sudo[185371]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:07 compute-0 sudo[185524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhgmbfylhrsodbaoqyqwfjcctlgkjakn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531446.8320074-1251-63095376619012/AnsiballZ_stat.py'
Feb 19 20:04:07 compute-0 sudo[185524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:07 compute-0 python3.9[185527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:07 compute-0 sudo[185524]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:07 compute-0 sudo[185648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reyneqkzgwmivaarpfvmccvegsttinif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531446.8320074-1251-63095376619012/AnsiballZ_copy.py'
Feb 19 20:04:07 compute-0 sudo[185648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:07 compute-0 python3.9[185651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531446.8320074-1251-63095376619012/.source.json _original_basename=.th1aghpa follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:07 compute-0 sudo[185648]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:08 compute-0 python3.9[185801]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:09 compute-0 sudo[186222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlxlmdinxbnxxjoxyeqhdbogupfmnip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531449.6884446-1291-70107167266952/AnsiballZ_container_config_data.py'
Feb 19 20:04:09 compute-0 sudo[186222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:10 compute-0 python3.9[186225]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 19 20:04:10 compute-0 sudo[186222]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:10 compute-0 sudo[186375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdvthdjhmlxilavspmwzasoszdhzxga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531450.4249208-1302-24686430595803/AnsiballZ_container_config_hash.py'
Feb 19 20:04:10 compute-0 sudo[186375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:10 compute-0 python3.9[186378]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:04:10 compute-0 sudo[186375]: pam_unix(sudo:session): session closed for user root
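[annotation] The container_config_hash run above produces the EDPM_CONFIG_HASH strings seen in the create commands below: dash-joined 64-hex sha256 digests, one per config source under the config_vol_prefix. Notably, the middle segment of the nova_compute hash, e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, is the sha256 of empty input, i.e. one source contributed no bytes. A rough sketch of the idea (the module's exact file selection is an assumption):

    import hashlib
    import os

    def digest_tree(path):
        """sha256 over a tree's file bytes, walked in sorted order."""
        h = hashlib.sha256()
        for root, dirs, files in sorted(os.walk(path)):
            for name in sorted(files):
                with open(os.path.join(root, name), 'rb') as f:
                    h.update(f.read())
        return h.hexdigest()

    # Hypothetical source list; the module reads under /var/lib/openstack.
    sources = ['/var/lib/openstack/nova']
    print('-'.join(digest_tree(s) for s in sources))
    print(hashlib.sha256(b'').hexdigest())
    # -> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855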
Feb 19 20:04:11 compute-0 sudo[186528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgkqxxekocnltxcptfwrerlmrlqjjli ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531451.0883195-1312-82586852529461/AnsiballZ_edpm_container_manage.py'
Feb 19 20:04:11 compute-0 sudo[186528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:11 compute-0 python3[186531]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:04:11 compute-0 podman[186568]: 2026-02-19 20:04:11.730004563 +0000 UTC m=+0.049722364 container create 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, config_id=nova_compute, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Feb 19 20:04:11 compute-0 podman[186568]: 2026-02-19 20:04:11.70332329 +0000 UTC m=+0.023041171 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 19 20:04:11 compute-0 python3[186531]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 19 20:04:11 compute-0 sudo[186528]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:12 compute-0 sudo[186757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yilgpmtwfszreqydffqqthhvzkhcuhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531451.972005-1320-74627423696935/AnsiballZ_stat.py'
Feb 19 20:04:12 compute-0 sudo[186757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:12 compute-0 python3.9[186760]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:12 compute-0 sudo[186757]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:12 compute-0 sudo[186912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqbbbiogewuczqpobsfuiufxwwuihvfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531452.5830781-1329-190339604739536/AnsiballZ_file.py'
Feb 19 20:04:12 compute-0 sudo[186912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:13 compute-0 python3.9[186915]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:13 compute-0 sudo[186912]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:13 compute-0 sudo[186989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxouwolercyvntasgsqxjhyfcfcsaxqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531452.5830781-1329-190339604739536/AnsiballZ_stat.py'
Feb 19 20:04:13 compute-0 sudo[186989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:13 compute-0 python3.9[186992]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:13 compute-0 sudo[186989]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:13 compute-0 sudo[187141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfjllzxmwzngnessippuzvxhiesrvily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531453.4427116-1329-38058503647710/AnsiballZ_copy.py'
Feb 19 20:04:13 compute-0 sudo[187141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:13 compute-0 python3.9[187144]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531453.4427116-1329-38058503647710/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:13 compute-0 sudo[187141]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:14 compute-0 sudo[187218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlgmzbswzsflkjilffetvgiwyvnkivaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531453.4427116-1329-38058503647710/AnsiballZ_systemd.py'
Feb 19 20:04:14 compute-0 sudo[187218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:14 compute-0 python3.9[187221]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:04:14 compute-0 systemd[1]: Reloading.
Feb 19 20:04:14 compute-0 systemd-rc-local-generator[187250]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:04:14 compute-0 systemd-sysv-generator[187257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:04:14 compute-0 sudo[187218]: pam_unix(sudo:session): session closed for user root
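[annotation] The unit copied to /etc/systemd/system/edpm_nova_compute.service above is what the daemon-reload picks up and the restart below starts. Its content is not logged; a plausible shape, inferred from the --conmon-pidfile /run/nova_compute.pid in the create command and from systemd printing "Starting nova_compute container..." as the Description (all directive values here are assumptions):

    [Unit]
    Description=nova_compute container

    [Service]
    Type=forking
    Restart=always
    PIDFile=/run/nova_compute.pid
    ExecStart=/usr/bin/podman start nova_compute
    # stop timeout is an assumption
    ExecStop=/usr/bin/podman stop -t 60 nova_compute

    [Install]
    WantedBy=multi-user.target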
Feb 19 20:04:14 compute-0 sudo[187337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyodfibpntjvutmanxhvpsknleikpspe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531453.4427116-1329-38058503647710/AnsiballZ_systemd.py'
Feb 19 20:04:14 compute-0 sudo[187337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:15 compute-0 python3.9[187340]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:04:15 compute-0 systemd[1]: Reloading.
Feb 19 20:04:15 compute-0 systemd-sysv-generator[187375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:04:15 compute-0 systemd-rc-local-generator[187370]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:04:15 compute-0 systemd[1]: Starting nova_compute container...
Feb 19 20:04:15 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:15 compute-0 podman[187386]: 2026-02-19 20:04:15.597243233 +0000 UTC m=+0.080333500 container init 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:04:15 compute-0 podman[187386]: 2026-02-19 20:04:15.602895459 +0000 UTC m=+0.085985696 container start 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Feb 19 20:04:15 compute-0 podman[187386]: nova_compute
Feb 19 20:04:15 compute-0 nova_compute[187401]: + sudo -E kolla_set_configs
Feb 19 20:04:15 compute-0 systemd[1]: Started nova_compute container.
Feb 19 20:04:15 compute-0 sudo[187337]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Validating config file
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying service configuration files
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Deleting /etc/ceph
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Creating directory /etc/ceph
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /etc/ceph
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Writing out command to execute
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:15 compute-0 nova_compute[187401]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 19 20:04:15 compute-0 nova_compute[187401]: ++ cat /run_command
Feb 19 20:04:15 compute-0 nova_compute[187401]: + CMD=nova-compute
Feb 19 20:04:15 compute-0 nova_compute[187401]: + ARGS=
Feb 19 20:04:15 compute-0 nova_compute[187401]: + sudo kolla_copy_cacerts
Feb 19 20:04:15 compute-0 nova_compute[187401]: + [[ ! -n '' ]]
Feb 19 20:04:15 compute-0 nova_compute[187401]: + . kolla_extend_start
Feb 19 20:04:15 compute-0 nova_compute[187401]: Running command: 'nova-compute'
Feb 19 20:04:15 compute-0 nova_compute[187401]: + echo 'Running command: '\''nova-compute'\'''
Feb 19 20:04:15 compute-0 nova_compute[187401]: + umask 0022
Feb 19 20:04:15 compute-0 nova_compute[187401]: + exec nova-compute
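[annotation] The kolla_set_configs pass above is driven by /var/lib/kolla/config_files/config.json, the file written to the host at 20:04:07 and bind-mounted read-only into the container. Its content is not logged, but the copy, permission, and "Writing out command" operations imply a kolla config of roughly this shape (paths taken from the log; owner and perm values are assumptions):

    {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/src/nova-blank.conf",
             "dest": "/etc/nova/nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/src/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/src/ssh-privatekey",
             "dest": "/var/lib/nova/.ssh/ssh-privatekey", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/src/run-on-host",
             "dest": "/usr/sbin/iscsiadm", "owner": "root", "perm": "0755"}
        ],
        "permissions": [
            {"path": "/var/lib/nova/.ssh/", "owner": "nova:nova", "recurse": true}
        ]
    }

The cat /run_command and exec lines above are the second half of that contract: kolla_set_configs writes the command out, then kolla_start execs it as the container's main process.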
Feb 19 20:04:16 compute-0 podman[187536]: 2026-02-19 20:04:16.245388314 +0000 UTC m=+0.151494512 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
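[annotation] The recurring health_status=healthy events for ovn_controller and ovn_metadata_agent come from the podman healthcheck each container declares in its config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks). The same probe can be run by hand with the standard CLI; exit 0 means healthy, and a failure increments the failing streak shown in these events:

    podman healthcheck run ovn_controller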
Feb 19 20:04:16 compute-0 python3.9[187569]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:04:16 compute-0 sudo[187740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngtainecmbgicvvlzcjdjcvialeszowz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531456.7463186-1374-91986403567639/AnsiballZ_stat.py'
Feb 19 20:04:16 compute-0 sudo[187740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:17 compute-0 python3.9[187743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:17 compute-0 sudo[187740]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:17 compute-0 sudo[187867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xabaawussmyjmwfcoytuozayzejknwdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531456.7463186-1374-91986403567639/AnsiballZ_copy.py'
Feb 19 20:04:17 compute-0 sudo[187867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.523 187405 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.523 187405 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.523 187405 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.523 187405 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
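[annotation] os_vif finds the three VIF plugins it reports above (linux_bridge, noop, ovs) through setuptools entry points rather than hard-coded imports. A minimal sketch of that discovery pattern with stevedore; the 'os_vif' namespace string is an assumption about os_vif's internals:

    from stevedore import extension

    # Enumerate plugins registered under the assumed 'os_vif' entry-point
    # namespace without instantiating them.
    mgr = extension.ExtensionManager(namespace='os_vif', invoke_on_load=False)
    print(sorted(mgr.names()))  # e.g. ['linux_bridge', 'noop', 'ovs']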
Feb 19 20:04:17 compute-0 python3.9[187871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531456.7463186-1374-91986403567639/.source.yaml _original_basename=.rpw57i6u follow=False checksum=d67174683655f04f119027c1ef265b9fcb3efb66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:17 compute-0 sudo[187867]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.650 187405 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.660 187405 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:04:17 compute-0 nova_compute[187401]: 2026-02-19 20:04:17.660 187405 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
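[annotation] This grep is a feature probe, not an error: the storage code greps the iscsiadm on the host path (here the run-on-host wrapper installed at 20:04:15; /sbin resolves to /usr/sbin on EL9) for the node.session.scan option, and return code 1 simply means the string is absent, so manual-scan support is treated as unavailable and the command is not retried. The equivalent check:

    import subprocess

    # Mirror of the probe logged above; rc 0 would mean the option is present.
    rc = subprocess.call(['grep', '-F', 'node.session.scan', '/sbin/iscsiadm'])
    supports_manual_scan = (rc == 0)
    print(supports_manual_scan)  # False here, matching the rc=1 in the log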
Feb 19 20:04:18 compute-0 python3.9[188023]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.631 187405 INFO nova.virt.driver [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.736 187405 INFO nova.compute.provider_config [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.839 187405 DEBUG oslo_concurrency.lockutils [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.839 187405 DEBUG oslo_concurrency.lockutils [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.840 187405 DEBUG oslo_concurrency.lockutils [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.840 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.840 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.840 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.841 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.841 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.841 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.841 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.841 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.842 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.842 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.842 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.842 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.842 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.842 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.843 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.843 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.843 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.843 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.843 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.844 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.844 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.844 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.844 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.844 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.844 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.845 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.846 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.846 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.846 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.846 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.846 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.846 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.847 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.847 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.847 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.847 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.847 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.847 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.848 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.848 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.848 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.848 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.848 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.848 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.849 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.849 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.849 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.849 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.849 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.849 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.850 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.851 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.852 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.852 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.852 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.852 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.852 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.852 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.853 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.853 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.853 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.853 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.853 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.854 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.854 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.854 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.854 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.854 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.854 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.855 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.855 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.855 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.855 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.855 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.855 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.856 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.856 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.856 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.856 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.856 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.856 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.857 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.858 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.859 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.860 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.861 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.862 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.863 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.864 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.864 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.864 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.864 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.864 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.864 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.865 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.865 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.865 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.865 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.865 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.865 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.866 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.867 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.867 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.867 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.867 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.867 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.868 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.868 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.868 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.868 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.868 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.869 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.869 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.869 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.869 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.869 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.870 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.870 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.870 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.870 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.870 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.871 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.871 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.871 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.871 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.871 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.871 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.872 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.873 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.874 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.874 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.874 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.874 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.874 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.874 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.875 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.876 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.877 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.878 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.878 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.878 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.878 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.878 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.879 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.879 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.879 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.879 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.879 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.879 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.880 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.881 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.882 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.882 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.882 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.882 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.882 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.882 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.883 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.884 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.884 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.884 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.884 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.884 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.884 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.885 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.886 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.887 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.888 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.889 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.890 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.891 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.892 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.893 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.893 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.893 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.893 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.893 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.893 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.894 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.894 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.894 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.894 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.894 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.894 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.895 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.895 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.895 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.895 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.895 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.895 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.896 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.897 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.898 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.898 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.898 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.898 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.898 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.898 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.899 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.900 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.900 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.900 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.900 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.900 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.901 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.902 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.903 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.903 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.903 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.903 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.903 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.903 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.904 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.905 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.906 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.907 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.908 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.909 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.909 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.909 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.909 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.909 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.909 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.910 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.910 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.910 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.910 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.910 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.911 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.912 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.913 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.913 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.913 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.913 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.913 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.913 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.914 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.915 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.916 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.916 187405 WARNING oslo_config.cfg [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 19 20:04:18 compute-0 nova_compute[187401]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 19 20:04:18 compute-0 nova_compute[187401]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 19 20:04:18 compute-0 nova_compute[187401]: and ``live_migration_inbound_addr`` respectively.
Feb 19 20:04:18 compute-0 nova_compute[187401]: ).  Its value may be silently ignored in the future.
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.916 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
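The deprecation warning above indicates this node still sets [libvirt]live_migration_uri (qemu+tls://%s/system, logged just above) rather than the two replacement options the message names. A minimal nova.conf sketch of the equivalent configuration, assuming the same qemu+tls transport; the inbound address below is a documentation placeholder, not a value taken from this log:

    [libvirt]
    # Scheme portion of the old URI: qemu+tls -> tls
    live_migration_scheme = tls
    # Address that replaces the %s host substitution in the old URI;
    # 192.0.2.10 is a placeholder for this host's migration address
    live_migration_inbound_addr = 192.0.2.10

With these set, nova constructs the same qemu+tls://<addr>/system migration URI itself, and live_migration_uri can be dropped before it is removed.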
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.916 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.916 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.917 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.917 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.917 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.917 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.917 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.918 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.918 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.918 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.918 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.918 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.918 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.919 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.920 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.920 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.920 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.920 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.920 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.921 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.921 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.921 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.921 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.921 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.922 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.922 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.922 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.922 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.922 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.923 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.923 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.923 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.923 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.923 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.924 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.924 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.924 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.924 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.924 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.924 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.925 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.925 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.925 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.925 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.926 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.926 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.926 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.926 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.926 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.926 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.927 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.927 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.927 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.927 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.927 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.928 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.929 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.929 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.929 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.929 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.929 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.929 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.930 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.931 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.932 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.933 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.934 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.935 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.936 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.937 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.938 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.939 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.939 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.939 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.939 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.939 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.939 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.940 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.941 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.941 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.941 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.941 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.941 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.941 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.942 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.943 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.944 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.944 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.944 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.944 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.944 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.945 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.945 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.945 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.945 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.945 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.945 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.946 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.946 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.949 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.949 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.949 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.949 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.950 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.950 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.950 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.950 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.951 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.951 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.951 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.951 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.952 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.952 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.952 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.952 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.952 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.953 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.953 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.953 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.953 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.953 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.954 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.954 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.954 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.954 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.955 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.955 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.955 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.955 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.956 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.956 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.956 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.956 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.956 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.957 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.957 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.957 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.957 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.957 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.957 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.958 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.958 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.958 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.958 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.958 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.958 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.959 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.959 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.959 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.959 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.959 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.960 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.960 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.960 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.960 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.960 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.960 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.961 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.961 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.961 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.961 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.961 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.962 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.963 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.963 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.963 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.963 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.963 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.963 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.964 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.964 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.964 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.964 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.964 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.965 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.966 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.966 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.966 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.966 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.966 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.966 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.967 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.968 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.968 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.968 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.968 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.968 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.969 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.970 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.970 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.970 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.970 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.970 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.971 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.971 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.971 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.971 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.971 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.972 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.972 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.972 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.972 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.972 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.972 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.973 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.973 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.973 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.973 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.973 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.974 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.974 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.974 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.974 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.974 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.975 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.975 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.975 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.975 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.975 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.975 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.976 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.976 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.976 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.976 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.976 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.976 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.977 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.977 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.977 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.977 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.977 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.978 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.978 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.978 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.978 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.978 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.978 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.979 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.979 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.979 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.979 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.979 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.979 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.980 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.980 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.980 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.980 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.980 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.980 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.981 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.981 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.981 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.981 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.981 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.981 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.982 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.982 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.982 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.982 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.982 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.982 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.983 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.983 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.983 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.983 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.983 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.984 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.984 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.984 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.984 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.984 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.984 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.985 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.985 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.985 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.985 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.985 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.985 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.986 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.986 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.986 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.986 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.986 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.986 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.987 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.987 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.987 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.987 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.987 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.987 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.988 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.988 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.988 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.988 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.988 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.989 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.989 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.989 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.989 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.989 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.989 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.990 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.990 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.990 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.990 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.990 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.990 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.991 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.991 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.991 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.991 187405 DEBUG oslo_service.service [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 19 20:04:18 compute-0 nova_compute[187401]: 2026-02-19 20:04:18.992 187405 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.040 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.041 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.041 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.041 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 19 20:04:19 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 19 20:04:19 compute-0 python3.9[188173]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:19 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.096 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f24453b09d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.099 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f24453b09d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.099 187405 INFO nova.virt.libvirt.driver [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Connection event '1' reason 'None'
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.223 187405 WARNING nova.virt.libvirt.driver [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.223 187405 DEBUG nova.virt.libvirt.volume.mount [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 19 20:04:19 compute-0 python3.9[188375]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.831 187405 INFO nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Libvirt host capabilities <capabilities>
Feb 19 20:04:19 compute-0 nova_compute[187401]: 
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <host>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <uuid>ac1ff264-2c2d-4373-8723-bb8a73a49955</uuid>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <cpu>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <arch>x86_64</arch>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model>EPYC-Rome-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <vendor>AMD</vendor>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <microcode version='16777317'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <signature family='23' model='49' stepping='0'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='x2apic'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='tsc-deadline'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='osxsave'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='hypervisor'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='tsc_adjust'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='spec-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='stibp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='arch-capabilities'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='cmp_legacy'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='topoext'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='virt-ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='lbrv'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='tsc-scale'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='vmcb-clean'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='pause-filter'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='pfthreshold'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='svme-addr-chk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='rdctl-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='skip-l1dfl-vmentry'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='mds-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature name='pschange-mc-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <pages unit='KiB' size='4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <pages unit='KiB' size='2048'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <pages unit='KiB' size='1048576'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </cpu>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <power_management>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <suspend_mem/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <suspend_disk/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <suspend_hybrid/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </power_management>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <iommu support='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <migration_features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <live/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <uri_transports>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <uri_transport>tcp</uri_transport>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <uri_transport>rdma</uri_transport>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </uri_transports>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </migration_features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <topology>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <cells num='1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <cell id='0'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           <memory unit='KiB'>7864272</memory>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           <pages unit='KiB' size='4'>1966068</pages>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           <pages unit='KiB' size='2048'>0</pages>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           <distances>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <sibling id='0' value='10'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           </distances>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           <cpus num='8'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:           </cpus>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         </cell>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </cells>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </topology>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <cache>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </cache>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <secmodel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model>selinux</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <doi>0</doi>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </secmodel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <secmodel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model>dac</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <doi>0</doi>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </secmodel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </host>
Feb 19 20:04:19 compute-0 nova_compute[187401]: 
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <guest>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <os_type>hvm</os_type>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <arch name='i686'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <wordsize>32</wordsize>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <domain type='qemu'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <domain type='kvm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </arch>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <pae/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <nonpae/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <acpi default='on' toggle='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <apic default='on' toggle='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <cpuselection/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <deviceboot/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <disksnapshot default='on' toggle='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <externalSnapshot/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </guest>
Feb 19 20:04:19 compute-0 nova_compute[187401]: 
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <guest>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <os_type>hvm</os_type>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <arch name='x86_64'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <wordsize>64</wordsize>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <domain type='qemu'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <domain type='kvm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </arch>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <acpi default='on' toggle='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <apic default='on' toggle='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <cpuselection/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <deviceboot/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <disksnapshot default='on' toggle='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <externalSnapshot/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </guest>
Feb 19 20:04:19 compute-0 nova_compute[187401]: 
Feb 19 20:04:19 compute-0 nova_compute[187401]: </capabilities>
Feb 19 20:04:19 compute-0 nova_compute[187401]: 
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.839 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.857 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 19 20:04:19 compute-0 nova_compute[187401]: <domainCapabilities>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <domain>kvm</domain>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <arch>i686</arch>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <vcpu max='4096'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <iothreads supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <os supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <enum name='firmware'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <loader supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>rom</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>pflash</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='readonly'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>yes</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='secure'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </loader>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </os>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <cpu>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='maximumMigratable'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <vendor>AMD</vendor>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='succor'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='custom' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cooperlake'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='KnightsMill'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Snowridge'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='athlon'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='athlon-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='core2duo'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='core2duo-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='coreduo'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='coreduo-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='n270'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='n270-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='phenom'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='phenom-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </cpu>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <memoryBacking supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <enum name='sourceType'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <value>file</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <value>anonymous</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <value>memfd</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </memoryBacking>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <devices>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <disk supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='diskDevice'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>disk</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>cdrom</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>floppy</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>lun</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>fdc</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>sata</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </disk>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <graphics supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>vnc</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>egl-headless</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </graphics>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <video supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='modelType'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>vga</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>cirrus</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>none</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>bochs</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>ramfb</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </video>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <hostdev supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='mode'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>subsystem</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='startupPolicy'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>mandatory</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>requisite</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>optional</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='subsysType'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>pci</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='capsType'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='pciBackend'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </hostdev>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <rng supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>random</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>egd</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </rng>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <filesystem supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='driverType'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>path</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>handle</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>virtiofs</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </filesystem>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <tpm supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>tpm-tis</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>tpm-crb</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>emulator</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>external</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='backendVersion'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>2.0</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </tpm>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <redirdev supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </redirdev>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <channel supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </channel>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <crypto supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='model'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>qemu</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </crypto>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <interface supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='backendType'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>passt</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </interface>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <panic supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>isa</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>hyperv</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </panic>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <console supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>null</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>vc</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>dev</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>file</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>pipe</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>stdio</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>udp</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>tcp</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>qemu-vdagent</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </console>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </devices>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <features>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <gic supported='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <genid supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <backup supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <async-teardown supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <s390-pv supported='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <ps2 supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <tdx supported='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <sev supported='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <sgx supported='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <hyperv supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='features'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>relaxed</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>vapic</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>spinlocks</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>vpindex</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>runtime</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>synic</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>stimer</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>reset</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>vendor_id</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>frequencies</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>reenlightenment</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>tlbflush</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>ipi</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>avic</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>emsr_bitmap</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>xmm_input</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <defaults>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </defaults>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </hyperv>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <launchSecurity supported='no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </features>
Feb 19 20:04:19 compute-0 nova_compute[187401]: </domainCapabilities>
Feb 19 20:04:19 compute-0 nova_compute[187401]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 19 20:04:19 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.865 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 19 20:04:19 compute-0 nova_compute[187401]: <domainCapabilities>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <domain>kvm</domain>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <arch>i686</arch>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <vcpu max='240'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <iothreads supported='yes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <os supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <enum name='firmware'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <loader supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>rom</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>pflash</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='readonly'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>yes</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='secure'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </loader>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   </os>
Feb 19 20:04:19 compute-0 nova_compute[187401]:   <cpu>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <enum name='maximumMigratable'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <vendor>AMD</vendor>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='succor'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:19 compute-0 nova_compute[187401]:     <mode name='custom' supported='yes'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cooperlake'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Denverton-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='EPYC-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Haswell-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='KnightsMill'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:19 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:19 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='athlon'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='athlon-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='core2duo'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='core2duo-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='coreduo'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='coreduo-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='n270'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='n270-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='phenom'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='phenom-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </cpu>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <memoryBacking supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <enum name='sourceType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>file</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>anonymous</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>memfd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </memoryBacking>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <devices>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <disk supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='diskDevice'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>disk</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>cdrom</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>floppy</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>lun</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ide</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>fdc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>sata</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </disk>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <graphics supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vnc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>egl-headless</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </graphics>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <video supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='modelType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vga</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>cirrus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>none</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>bochs</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ramfb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </video>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <hostdev supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='mode'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>subsystem</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='startupPolicy'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>mandatory</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>requisite</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>optional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='subsysType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pci</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='capsType'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='pciBackend'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </hostdev>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <rng supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>random</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>egd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </rng>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <filesystem supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='driverType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>path</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>handle</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtiofs</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </filesystem>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <tpm supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tpm-tis</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tpm-crb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>emulator</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>external</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendVersion'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>2.0</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </tpm>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <redirdev supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </redirdev>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <channel supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </channel>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <crypto supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>qemu</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </crypto>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <interface supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>passt</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </interface>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <panic supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>isa</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>hyperv</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </panic>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <console supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>null</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dev</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>file</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pipe</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>stdio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>udp</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tcp</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>qemu-vdagent</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </console>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </devices>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <features>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <gic supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <genid supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <backup supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <async-teardown supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <s390-pv supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <ps2 supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <tdx supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <sev supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <sgx supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <hyperv supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='features'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>relaxed</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vapic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>spinlocks</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vpindex</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>runtime</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>synic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>stimer</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>reset</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vendor_id</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>frequencies</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>reenlightenment</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tlbflush</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ipi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>avic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>emsr_bitmap</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>xmm_input</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <defaults>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </defaults>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </hyperv>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <launchSecurity supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </features>
Feb 19 20:04:20 compute-0 nova_compute[187401]: </domainCapabilities>
Feb 19 20:04:20 compute-0 nova_compute[187401]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
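The XML ending above is one complete domainCapabilities document as returned to nova's _get_domain_capabilities helper. A minimal sketch of fetching the same kind of document and listing the CPU models this host can actually run, assuming the libvirt-python bindings and a local qemu:///system connection (nova goes through its own host wrapper at nova/virt/libvirt/host.py, not this code):

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')  # assumption: local system libvirtd
    # Emulator path, arch, machine type and virt type as seen in the dumps here.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    root = ET.fromstring(caps_xml)
    # Keep only named CPU models the host can provide and that are not deprecated.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes' and model.get('deprecated') != 'yes':
            print(model.text)  # e.g. Westmere, Westmere-IBRS on this host
    conn.close()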
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.945 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
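Each machine type in the set above gets its own domainCapabilities query; in the q35 document that follows, every model marked usable='no' carries a <blockers> element naming the CPU features the host cannot provide. Because the host CPU reports as EPYC-Rome (see the host-model element below), Intel-only features such as avx512f, hle, rtm, pcid and pku show up as blockers, leaving only Westmere-era Intel models usable. A hedged sketch of collecting those blockers from such a document (blocked_models is a hypothetical helper, not part of nova):

    import xml.etree.ElementTree as ET

    def blocked_models(caps_xml: str) -> dict[str, list[str]]:
        # Map each unusable custom-mode model to the feature names blocking it.
        root = ET.fromstring(caps_xml)
        return {
            blk.get('model'): [f.get('name') for f in blk.findall('feature')]
            for blk in root.findall(".//cpu/mode[@name='custom']/blockers")
        }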
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:19.951 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 19 20:04:20 compute-0 nova_compute[187401]: <domainCapabilities>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <domain>kvm</domain>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <arch>x86_64</arch>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <vcpu max='4096'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <iothreads supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <os supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <enum name='firmware'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>efi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <loader supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>rom</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pflash</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='readonly'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>yes</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='secure'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>yes</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </loader>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </os>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <cpu>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='maximumMigratable'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <vendor>AMD</vendor>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='succor'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='custom' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cooperlake'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='KnightsMill'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='athlon'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='athlon-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='core2duo'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='core2duo-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='coreduo'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='coreduo-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='n270'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='n270-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='phenom'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='phenom-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </cpu>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <memoryBacking supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <enum name='sourceType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>file</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>anonymous</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>memfd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </memoryBacking>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <devices>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <disk supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='diskDevice'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>disk</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>cdrom</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>floppy</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>lun</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>fdc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>sata</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </disk>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <graphics supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vnc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>egl-headless</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </graphics>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <video supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='modelType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vga</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>cirrus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>none</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>bochs</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ramfb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </video>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <hostdev supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='mode'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>subsystem</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='startupPolicy'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>mandatory</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>requisite</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>optional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='subsysType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pci</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='capsType'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='pciBackend'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </hostdev>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <rng supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>random</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>egd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </rng>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <filesystem supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='driverType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>path</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>handle</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtiofs</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </filesystem>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <tpm supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tpm-tis</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tpm-crb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>emulator</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>external</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendVersion'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>2.0</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </tpm>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <redirdev supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </redirdev>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <channel supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </channel>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <crypto supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>qemu</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </crypto>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <interface supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>passt</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </interface>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <panic supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>isa</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>hyperv</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </panic>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <console supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>null</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dev</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>file</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pipe</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>stdio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>udp</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tcp</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>qemu-vdagent</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </console>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </devices>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <features>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <gic supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <genid supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <backup supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <async-teardown supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <s390-pv supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <ps2 supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <tdx supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <sev supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <sgx supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <hyperv supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='features'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>relaxed</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vapic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>spinlocks</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vpindex</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>runtime</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>synic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>stimer</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>reset</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vendor_id</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>frequencies</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>reenlightenment</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tlbflush</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ipi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>avic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>emsr_bitmap</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>xmm_input</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <defaults>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </defaults>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </hyperv>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <launchSecurity supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </features>
Feb 19 20:04:20 compute-0 nova_compute[187401]: </domainCapabilities>
Feb 19 20:04:20 compute-0 nova_compute[187401]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.024 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 19 20:04:20 compute-0 nova_compute[187401]: <domainCapabilities>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <domain>kvm</domain>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <arch>x86_64</arch>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <vcpu max='240'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <iothreads supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <os supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <enum name='firmware'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <loader supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>rom</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pflash</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='readonly'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>yes</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='secure'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>no</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </loader>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </os>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <cpu>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='maximumMigratable'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>on</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>off</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <vendor>AMD</vendor>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='succor'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <mode name='custom' supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ddpd-u'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sha512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm3'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sm4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cooperlake'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Denverton-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amd-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='auto-ibrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='perfmon-v2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbpb'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='stibp-always-on'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='EPYC-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-128'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-256'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx10-512'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='prefetchiti'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Haswell-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='KnightsMill'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512er'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512pf'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fma4'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tbm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xop'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='amx-tile'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-bf16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-fp16'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bitalg'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrc'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fzrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='la57'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='taa-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ifma'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cmpccxadd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fbsdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='fsrs'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ibrs-all'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='intel-psfd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='lam'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mcdt-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pbrsb-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='psdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='serialize'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vaes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='hle'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='rtm'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512bw'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512cd'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512dq'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512f'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='avx512vl'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='invpcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pcid'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='pku'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='mpx'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='core-capability'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='split-lock-detect'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='cldemote'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='erms'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='gfni'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdir64b'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='movdiri'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='xsaves'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='athlon'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='athlon-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='core2duo'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='core2duo-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='coreduo'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='coreduo-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='n270'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='n270-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='ss'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='phenom'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <blockers model='phenom-v1'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnow'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <feature name='3dnowext'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </blockers>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </mode>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </cpu>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <memoryBacking supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <enum name='sourceType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>file</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>anonymous</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <value>memfd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </memoryBacking>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <devices>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <disk supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='diskDevice'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>disk</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>cdrom</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>floppy</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>lun</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ide</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>fdc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>sata</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </disk>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <graphics supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vnc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>egl-headless</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </graphics>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <video supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='modelType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vga</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>cirrus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>none</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>bochs</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ramfb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </video>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <hostdev supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='mode'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>subsystem</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='startupPolicy'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>mandatory</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>requisite</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>optional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='subsysType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pci</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>scsi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='capsType'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='pciBackend'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </hostdev>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <rng supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtio-non-transitional</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>random</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>egd</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </rng>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <filesystem supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='driverType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>path</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>handle</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>virtiofs</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </filesystem>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <tpm supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tpm-tis</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tpm-crb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>emulator</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>external</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendVersion'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>2.0</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </tpm>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <redirdev supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='bus'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>usb</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </redirdev>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <channel supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </channel>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <crypto supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>qemu</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendModel'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>builtin</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </crypto>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <interface supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='backendType'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>default</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>passt</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </interface>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <panic supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='model'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>isa</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>hyperv</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </panic>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <console supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='type'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>null</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vc</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pty</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dev</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>file</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>pipe</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>stdio</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>udp</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tcp</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>unix</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>qemu-vdagent</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>dbus</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </console>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </devices>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   <features>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <gic supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <genid supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <backup supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <async-teardown supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <s390-pv supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <ps2 supported='yes'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <tdx supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <sev supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <sgx supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <hyperv supported='yes'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <enum name='features'>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>relaxed</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vapic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>spinlocks</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vpindex</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>runtime</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>synic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>stimer</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>reset</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>vendor_id</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>frequencies</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>reenlightenment</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>tlbflush</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>ipi</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>avic</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>emsr_bitmap</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <value>xmm_input</value>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </enum>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       <defaults>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:20 compute-0 nova_compute[187401]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:20 compute-0 nova_compute[187401]:       </defaults>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     </hyperv>
Feb 19 20:04:20 compute-0 nova_compute[187401]:     <launchSecurity supported='no'/>
Feb 19 20:04:20 compute-0 nova_compute[187401]:   </features>
Feb 19 20:04:20 compute-0 nova_compute[187401]: </domainCapabilities>
Feb 19 20:04:20 compute-0 nova_compute[187401]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.091 187405 DEBUG nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.092 187405 INFO nova.virt.libvirt.host [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Secure Boot support detected
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.093 187405 INFO nova.virt.libvirt.driver [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.101 187405 DEBUG nova.virt.libvirt.driver [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 19 20:04:20 compute-0 sudo[188537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvljxnkstgjffavhaidorncedfresycv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531459.8566349-1424-243972301368651/AnsiballZ_podman_container.py'
Feb 19 20:04:20 compute-0 sudo[188537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.478 187405 INFO nova.virt.node [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Determined node identity c266959e-952e-41ad-bc2e-56513f39ec2d from /var/lib/nova/compute_id
Feb 19 20:04:20 compute-0 python3.9[188540]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 19 20:04:20 compute-0 sudo[188537]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:20 compute-0 rsyslogd[1014]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 20:04:20 compute-0 nova_compute[187401]: 2026-02-19 20:04:20.710 187405 WARNING nova.compute.manager [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Compute nodes ['c266959e-952e-41ad-bc2e-56513f39ec2d'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb 19 20:04:21 compute-0 sudo[188713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvdbosoiezufpjjfqkblzuelkecnoqbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531460.8213792-1432-63445156452564/AnsiballZ_systemd.py'
Feb 19 20:04:21 compute-0 sudo[188713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:21 compute-0 nova_compute[187401]: 2026-02-19 20:04:21.176 187405 INFO nova.compute.manager [None req-524b8128-a239-47ad-bbfe-7717ad2e776d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 19 20:04:21 compute-0 python3.9[188716]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:04:21 compute-0 systemd[1]: Stopping nova_compute container...
Feb 19 20:04:21 compute-0 nova_compute[187401]: 2026-02-19 20:04:21.515 187405 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 7227c46fccbe4f1999b456cc87d48cdc
Feb 19 20:04:21 compute-0 nova_compute[187401]: 2026-02-19 20:04:21.517 187405 DEBUG oslo_concurrency.lockutils [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:04:21 compute-0 nova_compute[187401]: 2026-02-19 20:04:21.517 187405 DEBUG oslo_concurrency.lockutils [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:04:21 compute-0 nova_compute[187401]: 2026-02-19 20:04:21.517 187405 DEBUG oslo_concurrency.lockutils [None req-81853b4c-4eda-442e-bbc5-149199815b7e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:04:21 compute-0 virtqemud[188195]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 19 20:04:21 compute-0 virtqemud[188195]: hostname: compute-0
Feb 19 20:04:21 compute-0 virtqemud[188195]: End of file while reading data: Input/output error
Feb 19 20:04:21 compute-0 systemd[1]: libpod-43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67.scope: Deactivated successfully.
Feb 19 20:04:21 compute-0 systemd[1]: libpod-43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67.scope: Consumed 2.924s CPU time.
Feb 19 20:04:21 compute-0 podman[188720]: 2026-02-19 20:04:21.927505711 +0000 UTC m=+0.484839665 container died 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67-userdata-shm.mount: Deactivated successfully.
Feb 19 20:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8-merged.mount: Deactivated successfully.
Feb 19 20:04:21 compute-0 podman[188720]: 2026-02-19 20:04:21.995326715 +0000 UTC m=+0.552660689 container cleanup 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 19 20:04:21 compute-0 podman[188720]: nova_compute
Feb 19 20:04:22 compute-0 podman[188748]: nova_compute
Feb 19 20:04:22 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 19 20:04:22 compute-0 systemd[1]: Stopped nova_compute container.
Feb 19 20:04:22 compute-0 systemd[1]: Starting nova_compute container...
Feb 19 20:04:22 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b8119c102e0d7f1e76dee1291790808ce17bfe48a33ea1fcf8b08f52d445e8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:22 compute-0 podman[188761]: 2026-02-19 20:04:22.174743964 +0000 UTC m=+0.097204550 container init 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:04:22 compute-0 podman[188761]: 2026-02-19 20:04:22.185952498 +0000 UTC m=+0.108413064 container start 43efb863e131dc5f7c1f7aaf9924eccbcfb9cbb16f77da5a7fb002b6c8513c67 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 19 20:04:22 compute-0 nova_compute[188777]: + sudo -E kolla_set_configs
Feb 19 20:04:22 compute-0 podman[188761]: nova_compute
Feb 19 20:04:22 compute-0 systemd[1]: Started nova_compute container.
Feb 19 20:04:22 compute-0 sudo[188713]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Validating config file
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying service configuration files
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /etc/ceph
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Creating directory /etc/ceph
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /etc/ceph
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Writing out command to execute
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:22 compute-0 nova_compute[188777]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 19 20:04:22 compute-0 nova_compute[188777]: ++ cat /run_command
Feb 19 20:04:22 compute-0 nova_compute[188777]: + CMD=nova-compute
Feb 19 20:04:22 compute-0 nova_compute[188777]: + ARGS=
Feb 19 20:04:22 compute-0 nova_compute[188777]: + sudo kolla_copy_cacerts
Feb 19 20:04:22 compute-0 nova_compute[188777]: + [[ ! -n '' ]]
Feb 19 20:04:22 compute-0 nova_compute[188777]: + . kolla_extend_start
Feb 19 20:04:22 compute-0 nova_compute[188777]: Running command: 'nova-compute'
Feb 19 20:04:22 compute-0 nova_compute[188777]: + echo 'Running command: '\''nova-compute'\'''
Feb 19 20:04:22 compute-0 nova_compute[188777]: + umask 0022
Feb 19 20:04:22 compute-0 nova_compute[188777]: + exec nova-compute
Feb 19 20:04:22 compute-0 sudo[188938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxqekvfoanvmaxyhfdwcmqptnirefyah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531462.398782-1441-37092731787881/AnsiballZ_podman_container.py'
Feb 19 20:04:22 compute-0 sudo[188938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:22 compute-0 python3.9[188941]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 19 20:04:22 compute-0 systemd[1]: Started libpod-conmon-5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b.scope.
Feb 19 20:04:23 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0195ce007df837be907932c228c59297974c21124547d838d35835a65b98ffbb/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0195ce007df837be907932c228c59297974c21124547d838d35835a65b98ffbb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0195ce007df837be907932c228c59297974c21124547d838d35835a65b98ffbb/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 19 20:04:23 compute-0 podman[188967]: 2026-02-19 20:04:23.046982794 +0000 UTC m=+0.101755757 container init 5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20260127, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:04:23 compute-0 podman[188967]: 2026-02-19 20:04:23.051451789 +0000 UTC m=+0.106224742 container start 5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:04:23 compute-0 python3.9[188941]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Applying nova statedir ownership
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 19 20:04:23 compute-0 nova_compute_init[188989]: INFO:nova_statedir:Nova statedir ownership complete
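[annotation] Read together, the nova_compute_init lines above show the statedir script walking /var/lib/nova, re-owning anything not already 42436:42436 (the nova uid/gid inside the container), setting the container_file_t SELinux context on directories, and skipping /var/lib/nova/compute_id per NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that logic, reconstructed from the log output alone (the helper below is hypothetical, not the shipped nova_statedir_ownership.py):

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436          # from "Target ownership for /var/lib/nova"
    SKIP = {"/var/lib/nova/compute_id"}      # from NOVA_STATEDIR_OWNERSHIP_SKIP
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def apply_statedir_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                # "Changing ownership of ... from 1000:1000 to 42436:42436"
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.lchown(path, TARGET_UID, TARGET_GID)
                # the log only shows contexts being set on directories
                if os.path.isdir(path):
                    subprocess.run(["chcon", CONTEXT, path], check=False)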
Feb 19 20:04:23 compute-0 systemd[1]: libpod-5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b.scope: Deactivated successfully.
Feb 19 20:04:23 compute-0 podman[189003]: 2026-02-19 20:04:23.133417793 +0000 UTC m=+0.026548093 container died 5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 19 20:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b-userdata-shm.mount: Deactivated successfully.
Feb 19 20:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0195ce007df837be907932c228c59297974c21124547d838d35835a65b98ffbb-merged.mount: Deactivated successfully.
Feb 19 20:04:23 compute-0 podman[189003]: 2026-02-19 20:04:23.162371064 +0000 UTC m=+0.055501354 container cleanup 5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:04:23 compute-0 systemd[1]: libpod-conmon-5c6092b16853ac7e760c3761de839ce33879eaf6a0a7a14222d9b351ef70591b.scope: Deactivated successfully.
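[annotation] The config_data blob repeated in the podman init/start/died/cleanup events above fully describes this one-shot container. As a rough hand-run equivalent, the flags map one-to-one from that dict; a sketch via subprocess (image, environment, and volume strings are copied verbatim from the log, the reconstruction itself is an assumption; the logged command additionally pipes output through "logger -t nova_compute_init"):

    import subprocess

    # Mirrors the logged config_data for nova_compute_init: no network,
    # root user, SELinux labeling disabled, restart never (hence --rm).
    cmd = [
        "podman", "run", "--rm",
        "--name", "nova_compute_init",
        "--net", "none",
        "--user", "root",
        "--security-opt", "label=disable",
        "-e", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id",
        "-e", "EDPM_CONFIG_HASH=2f802de9dd0efc4d8031d89c396d69745a32d3e1fa31b47a0ebc6301102ac31b",
        "-v", "/dev/log:/dev/log",
        "-v", "/var/lib/nova:/var/lib/nova:shared",
        "-v", "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
        "-v", "/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "python3", "/sbin/nova_statedir_ownership.py",
    ]
    subprocess.run(cmd, check=True)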
Feb 19 20:04:23 compute-0 sudo[188938]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:23 compute-0 sshd-session[163822]: Connection closed by 192.168.122.30 port 37108
Feb 19 20:04:23 compute-0 sshd-session[163819]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:04:23 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Feb 19 20:04:23 compute-0 systemd[1]: session-23.scope: Consumed 1min 25.228s CPU time.
Feb 19 20:04:23 compute-0 systemd-logind[810]: Session 23 logged out. Waiting for processes to exit.
Feb 19 20:04:23 compute-0 systemd-logind[810]: Removed session 23.
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.107 188781 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.107 188781 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.107 188781 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.108 188781 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
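[annotation] The four os_vif lines above are plugin discovery at service start; the call site named in the log (os_vif/__init__.py:44) is os_vif.initialize(), which a consumer invokes once before plugging or unplugging any VIF:

    import os_vif

    # Discovers and initializes all installed VIF plugins via stevedore
    # (here: linux_bridge, noop, ovs), exactly as logged above.
    os_vif.initialize()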
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.245 188781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.254 188781 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.255 188781 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
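[annotation] The grep against the iscsiadm binary is a capability probe: if the string node.session.scan occurs in the binary, iscsiadm supports controlling automatic LUN scans. Exit status 1 here means the string was not found, and the "failed. Not Retrying." line shows the caller treats that as an answer rather than an error. The probe is reproducible without oslo.concurrency; a sketch using subprocess in place of processutils.execute:

    import subprocess

    # Returncode 0 -> iscsiadm knows node.session.scan; 1 -> it does not (as logged).
    probe = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL,
    )
    supports_manual_scan = probe.returncode == 0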
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.731 188781 INFO nova.virt.driver [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.845 188781 INFO nova.compute.provider_config [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.859 188781 DEBUG oslo_concurrency.lockutils [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.860 188781 DEBUG oslo_concurrency.lockutils [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.860 188781 DEBUG oslo_concurrency.lockutils [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
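[annotation] The Acquiring/Acquired/Releasing trio around "singleton_lock" is oslo.concurrency's lock guarding oslo.service's singleton setup; the same primitive is available to any caller as a context manager, which produces exactly this log pattern:

    from oslo_concurrency import lockutils

    # Equivalent to the acquire/release pair logged above.
    with lockutils.lock("singleton_lock"):
        pass  # critical section

The full configuration dump that follows (one "log_opt_values" line per option, secrets masked as ****) is emitted by oslo.config at service startup because debug = True.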
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.861 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.861 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.861 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.861 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.861 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.862 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.862 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.862 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.862 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.862 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.863 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.863 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.863 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.863 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.863 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.863 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.864 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.864 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.864 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.864 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.864 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.864 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.865 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.865 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.865 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.865 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.865 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.865 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.866 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.867 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.868 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.869 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.869 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.869 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.869 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.869 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.869 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.870 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.871 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.872 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.873 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.874 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.875 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.876 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.876 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.876 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.876 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.876 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.876 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.877 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.877 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.877 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.877 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.877 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.877 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.878 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.878 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.878 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.878 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.878 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.879 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.880 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.881 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.882 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.883 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.883 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.883 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.883 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.883 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.883 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.884 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.885 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.886 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.887 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.888 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.889 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.889 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.889 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.889 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.889 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.889 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.890 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.891 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.892 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.893 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.893 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.893 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.893 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.893 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.893 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.894 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.895 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.896 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.897 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.898 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.899 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.900 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.901 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.902 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.903 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.904 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.905 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.906 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.907 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.908 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.908 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.908 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.908 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.908 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.908 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.909 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.909 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.909 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.909 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.909 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.909 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.910 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.911 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.912 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.912 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.912 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.912 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.912 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.912 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.913 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.914 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.915 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.916 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.917 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.918 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.919 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.920 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.921 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.922 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.923 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.924 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.924 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.924 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.924 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.924 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.924 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.925 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.926 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.927 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.928 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.929 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.930 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.930 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.930 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.930 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.930 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.930 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.931 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.931 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.931 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.931 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.931 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.932 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.932 188781 WARNING oslo_config.cfg [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 19 20:04:24 compute-0 nova_compute[188777]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 19 20:04:24 compute-0 nova_compute[188777]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 19 20:04:24 compute-0 nova_compute[188777]: and ``live_migration_inbound_addr`` respectively.
Feb 19 20:04:24 compute-0 nova_compute[188777]: ).  Its value may be silently ignored in the future.
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.932 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
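The warning above says the single live_migration_uri option has been split in two: the URI scheme moves to live_migration_scheme, and the %s target host is supplied by the destination node's live_migration_inbound_addr. A minimal nova.conf sketch of the equivalent replacement for the logged value qemu+tls://%s/system follows; both option names appear in this log, but the address shown is a hypothetical placeholder, not a value from this deployment:

    [libvirt]
    # Replaces: live_migration_uri = qemu+tls://%s/system
    # The "tls" scheme part of the old URI:
    live_migration_scheme = tls
    # Set on each compute node; the source host uses the peer's value
    # in place of the old %s substitution (example address only):
    live_migration_inbound_addr = 192.0.2.10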
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.932 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.932 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.932 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.933 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.933 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.933 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.933 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.933 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.933 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.934 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.935 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.936 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.937 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.937 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.937 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.937 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.937 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.938 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.939 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.940 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.941 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.942 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.943 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.944 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.945 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.945 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.945 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.945 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.945 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.945 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.946 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.947 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.948 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.949 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.950 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.951 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.952 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.952 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.952 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.952 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.952 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.952 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.953 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.954 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.955 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.956 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.957 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.958 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.958 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.958 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.958 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.958 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.958 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.959 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.959 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.959 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.959 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.959 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.959 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.960 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.960 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.960 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.960 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.960 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.960 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.961 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.961 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.961 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.961 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.961 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.962 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.962 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.962 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.962 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.962 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.962 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.963 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.964 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.965 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.965 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.965 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.965 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.965 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.965 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.966 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.967 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.968 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.969 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.969 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.969 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.969 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.969 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.970 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.971 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.972 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.973 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.974 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.975 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.975 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.975 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
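The wsgi.wsgi_log_format value above is an ordinary Python %-style template that the WSGI server fills in per request. A minimal sketch of how that template renders; the sample request values are hypothetical:

    # wsgi.wsgi_log_format rendered with %-formatting; the template is the
    # one logged above, the sample values are hypothetical.
    log_format = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
                  'len: %(body_length)s time: %(wall_seconds).7f')
    sample = {
        'client_ip': '192.0.2.10',
        'request_line': 'GET /v2.1/servers HTTP/1.1',
        'status_code': 200,
        'body_length': 1365,
        'wall_seconds': 0.0421337,
    }
    print(log_format % sample)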
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.975 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.975 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.975 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.976 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.976 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.976 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.976 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.976 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.976 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.977 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.977 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.977 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.977 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.977 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
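With oslo_policy.enforce_new_defaults and enforce_scope both True, the service applies the newer scoped-RBAC policy defaults, reading operator overrides from policy.yaml under the configured policy_dirs. A minimal sketch of how oslo.policy consumes these options; the rule name and role below are hypothetical, not Nova's real policy set:

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    # Enforcer picks up policy_file/policy_dirs from the [oslo_policy] section
    enforcer = policy.Enforcer(CONF)
    enforcer.register_default(policy.RuleDefault('example:rule', 'role:reader'))
    # enforce(rule, target, creds) -> bool
    print(enforcer.enforce('example:rule', {}, {'roles': ['reader']}))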
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.978 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.978 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.978 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.978 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.978 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.978 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.979 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.980 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.980 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.980 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.980 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.980 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.980 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.981 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.981 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.981 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.981 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.981 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.982 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.982 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.982 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.982 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.982 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.982 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.983 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.983 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.983 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.983 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.983 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
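The [oslo_messaging_rabbit] block above (durable queues, quorum queues enabled, heartbeat checks twice per 60-second timeout window) tunes the kombu driver underneath oslo.messaging RPC. A minimal sketch of building an RPC client against such a broker; the URL is a placeholder, since the real transport_url is masked in this log:

    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.CONF
    transport = messaging.get_rpc_transport(
        CONF, url='rabbit://guest:guest@rabbitmq.example:5672/')  # placeholder URL
    target = messaging.Target(topic='compute', server='compute-0')
    client = messaging.RPCClient(transport, target)  # connects lazily on first call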
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.983 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
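oslo_messaging_notifications.driver = ['noop'] means notification payloads are still constructed but then discarded rather than published. A minimal sketch of the equivalent notifier; the URL and publisher_id are placeholders:

    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.CONF
    transport = messaging.get_notification_transport(
        CONF, url='rabbit://guest:guest@rabbitmq.example:5672/')  # placeholder
    notifier = messaging.Notifier(transport,
                                  publisher_id='nova-compute:compute-0',
                                  driver='noop', topics=['notifications'])
    # Built, serialized, then dropped by the noop driver:
    notifier.info({}, 'compute.instance.create.end', {'uuid': 'hypothetical'})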
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.984 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.985 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.986 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.987 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.988 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.989 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.989 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.989 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.989 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
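The [oslo_limit] block is a standard keystoneauth password-plugin section: it authenticates as user nova in the Default domain with a system-wide scope (system_scope = all) against the auth_url above, which is how oslo.limit fetches unified limits from Keystone. A minimal sketch of the implied session; the password is masked as **** in the log, so a placeholder stands in:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REPLACE_ME',          # real value is masked in this log
        user_domain_name='Default',
        system_scope='all',             # system-scoped token, per oslo_limit.system_scope
    )
    sess = session.Session(auth=auth)   # no network I/O until the first request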
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.989 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.989 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.990 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.991 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.992 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.993 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.993 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.993 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.993 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.993 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.993 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.994 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.995 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
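The privsep capability lists above are numeric Linux capability indexes (per linux/capability.h). Decoded, nova_sys_admin's [0, 1, 2, 3, 12, 21] is CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH, CAP_FOWNER, CAP_NET_ADMIN and CAP_SYS_ADMIN:

    # Decode the numeric privsep capability lists logged above into names.
    CAP_NAMES = {
        0: 'CAP_CHOWN', 1: 'CAP_DAC_OVERRIDE', 2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER', 12: 'CAP_NET_ADMIN', 21: 'CAP_SYS_ADMIN',
    }
    sections = {
        'vif_plug_linux_bridge_privileged': [12],
        'vif_plug_ovs_privileged': [12, 1],
        'privsep_osbrick': [21],
        'nova_sys_admin': [0, 1, 2, 3, 12, 21],
    }
    for name, caps in sections.items():
        print(name, [CAP_NAMES[c] for c in caps])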
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.996 188781 DEBUG oslo_service.service [None req-addd0ef1-4be8-4f6f-ac17-4735b5770399 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
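Everything up to this closing row of asterisks is emitted by oslo.config itself; the trailing "log_opt_values .../oslo_config/cfg.py:2609" on each line names the helper that produces it. A minimal sketch of the call a service makes at startup to get this dump:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)  # dumps every registered option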
Feb 19 20:04:24 compute-0 nova_compute[188777]: 2026-02-19 20:04:24.997 188781 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.018 188781 INFO nova.virt.node [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Determined node identity c266959e-952e-41ad-bc2e-56513f39ec2d from /var/lib/nova/compute_id
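The node identity above comes from a small state file, /var/lib/nova/compute_id, so the compute node keeps the same UUID across restarts. A minimal read-or-create sketch, simplified from what nova.virt.node does; the persistence half is an assumption for illustration:

    import os
    import uuid

    COMPUTE_ID_FILE = '/var/lib/nova/compute_id'   # path as logged above

    def get_local_node_uuid(path=COMPUTE_ID_FILE):
        if os.path.exists(path):
            with open(path) as f:
                return uuid.UUID(f.read().strip())
        node_uuid = uuid.uuid4()            # first start: mint a stable identity
        with open(path, 'w') as f:
            f.write(str(node_uuid))
        return node_uuid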
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.019 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.020 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.020 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.021 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.034 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6e201f3790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.037 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6e201f3790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.038 188781 INFO nova.virt.libvirt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Connection event '1' reason 'None'
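The "native event thread" and "green dispatch thread" lines reflect libvirt-python's event loop, which must be registered and then pumped on a dedicated thread before the qemu:///system connection is opened. A minimal sketch of that setup; a read-only connection is used here for safety:

    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()        # install the default event impl

    def _native_event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()     # pump one event-loop iteration

    threading.Thread(target=_native_event_loop, daemon=True).start()
    conn = libvirt.openReadOnly('qemu:///system')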
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.042 188781 INFO nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Libvirt host capabilities <capabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]: 
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <host>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <uuid>ac1ff264-2c2d-4373-8723-bb8a73a49955</uuid>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <arch>x86_64</arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model>EPYC-Rome-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <vendor>AMD</vendor>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <microcode version='16777317'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <signature family='23' model='49' stepping='0'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='x2apic'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='tsc-deadline'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='osxsave'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='hypervisor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='tsc_adjust'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='spec-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='stibp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='arch-capabilities'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='cmp_legacy'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='topoext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='virt-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='lbrv'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='tsc-scale'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='vmcb-clean'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='pause-filter'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='pfthreshold'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='svme-addr-chk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='rdctl-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='skip-l1dfl-vmentry'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='mds-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature name='pschange-mc-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <pages unit='KiB' size='4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <pages unit='KiB' size='2048'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <pages unit='KiB' size='1048576'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <power_management>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <suspend_mem/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <suspend_disk/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <suspend_hybrid/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </power_management>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <iommu support='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <migration_features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <live/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <uri_transports>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <uri_transport>tcp</uri_transport>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <uri_transport>rdma</uri_transport>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </uri_transports>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </migration_features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <topology>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <cells num='1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <cell id='0'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           <memory unit='KiB'>7864272</memory>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           <pages unit='KiB' size='4'>1966068</pages>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           <pages unit='KiB' size='2048'>0</pages>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           <distances>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <sibling id='0' value='10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           </distances>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           <cpus num='8'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:           </cpus>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         </cell>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </cells>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </topology>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <cache>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </cache>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <secmodel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model>selinux</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <doi>0</doi>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </secmodel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <secmodel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model>dac</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <doi>0</doi>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </secmodel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </host>
Feb 19 20:04:25 compute-0 nova_compute[188777]: 
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <guest>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <os_type>hvm</os_type>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <arch name='i686'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <wordsize>32</wordsize>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <domain type='qemu'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <domain type='kvm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <pae/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <nonpae/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <acpi default='on' toggle='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <apic default='on' toggle='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <cpuselection/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <deviceboot/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <disksnapshot default='on' toggle='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <externalSnapshot/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </guest>
Feb 19 20:04:25 compute-0 nova_compute[188777]: 
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <guest>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <os_type>hvm</os_type>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <arch name='x86_64'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <wordsize>64</wordsize>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <domain type='qemu'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <domain type='kvm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <acpi default='on' toggle='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <apic default='on' toggle='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <cpuselection/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <deviceboot/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <disksnapshot default='on' toggle='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <externalSnapshot/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </guest>
Feb 19 20:04:25 compute-0 nova_compute[188777]: 
Feb 19 20:04:25 compute-0 nova_compute[188777]: </capabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]: 
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.050 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.054 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 19 20:04:25 compute-0 nova_compute[188777]: <domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <domain>kvm</domain>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <arch>i686</arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <vcpu max='4096'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <iothreads supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <os supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='firmware'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <loader supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>rom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pflash</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='readonly'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>yes</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='secure'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </loader>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </os>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='maximumMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <vendor>AMD</vendor>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='succor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='custom' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <memoryBacking supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='sourceType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>anonymous</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>memfd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </memoryBacking>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <disk supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='diskDevice'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>disk</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cdrom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>floppy</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>lun</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>fdc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>sata</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <graphics supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vnc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egl-headless</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </graphics>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <video supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='modelType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vga</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cirrus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>none</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>bochs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ramfb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </video>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hostdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='mode'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>subsystem</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='startupPolicy'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>mandatory</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>requisite</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>optional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='subsysType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pci</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='capsType'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='pciBackend'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hostdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <rng supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>random</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <filesystem supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='driverType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>path</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>handle</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtiofs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </filesystem>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tpm supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-tis</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-crb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emulator</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>external</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendVersion'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>2.0</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </tpm>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <redirdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </redirdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <channel supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </channel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <crypto supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </crypto>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <interface supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>passt</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <panic supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>isa</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>hyperv</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </panic>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <console supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>null</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dev</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pipe</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stdio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>udp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tcp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu-vdagent</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </console>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <gic supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <genid supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backup supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <async-teardown supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <s390-pv supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <ps2 supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tdx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sev supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sgx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hyperv supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='features'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>relaxed</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vapic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>spinlocks</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vpindex</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>runtime</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>synic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stimer</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reset</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vendor_id</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>frequencies</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reenlightenment</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tlbflush</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ipi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>avic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emsr_bitmap</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>xmm_input</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hyperv>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <launchSecurity supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </features>
Feb 19 20:04:25 compute-0 nova_compute[188777]: </domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.058 188781 DEBUG nova.virt.libvirt.volume.mount [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.062 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 19 20:04:25 compute-0 nova_compute[188777]: <domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <domain>kvm</domain>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <arch>i686</arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <vcpu max='240'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <iothreads supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <os supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='firmware'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <loader supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>rom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pflash</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='readonly'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>yes</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='secure'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </loader>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </os>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='maximumMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <vendor>AMD</vendor>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='succor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='custom' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <memoryBacking supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='sourceType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>anonymous</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>memfd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </memoryBacking>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <disk supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='diskDevice'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>disk</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cdrom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>floppy</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>lun</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ide</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>fdc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>sata</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <graphics supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vnc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egl-headless</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </graphics>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <video supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='modelType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vga</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cirrus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>none</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>bochs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ramfb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </video>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hostdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='mode'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>subsystem</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='startupPolicy'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>mandatory</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>requisite</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>optional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='subsysType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pci</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='capsType'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='pciBackend'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hostdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <rng supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>random</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <filesystem supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='driverType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>path</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>handle</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtiofs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </filesystem>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tpm supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-tis</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-crb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emulator</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>external</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendVersion'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>2.0</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </tpm>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <redirdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </redirdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <channel supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </channel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <crypto supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </crypto>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <interface supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>passt</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <panic supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>isa</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>hyperv</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </panic>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <console supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>null</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dev</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pipe</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stdio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>udp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tcp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu-vdagent</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </console>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <gic supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <genid supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backup supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <async-teardown supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <s390-pv supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <ps2 supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tdx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sev supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sgx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hyperv supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='features'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>relaxed</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vapic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>spinlocks</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vpindex</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>runtime</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>synic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stimer</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reset</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vendor_id</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>frequencies</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reenlightenment</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tlbflush</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ipi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>avic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emsr_bitmap</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>xmm_input</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hyperv>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <launchSecurity supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </features>
Feb 19 20:04:25 compute-0 nova_compute[188777]: </domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.122 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.127 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 19 20:04:25 compute-0 nova_compute[188777]: <domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <domain>kvm</domain>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <arch>x86_64</arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <vcpu max='4096'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <iothreads supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <os supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='firmware'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>efi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <loader supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>rom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pflash</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='readonly'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>yes</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='secure'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>yes</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </loader>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </os>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='maximumMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <vendor>AMD</vendor>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='succor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='custom' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <memoryBacking supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='sourceType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>anonymous</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>memfd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </memoryBacking>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <disk supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='diskDevice'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>disk</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cdrom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>floppy</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>lun</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>fdc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>sata</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <graphics supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vnc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egl-headless</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </graphics>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <video supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='modelType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vga</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cirrus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>none</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>bochs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ramfb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </video>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hostdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='mode'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>subsystem</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='startupPolicy'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>mandatory</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>requisite</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>optional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='subsysType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pci</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='capsType'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='pciBackend'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hostdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <rng supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>random</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <filesystem supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='driverType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>path</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>handle</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtiofs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </filesystem>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tpm supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-tis</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-crb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emulator</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>external</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendVersion'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>2.0</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </tpm>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <redirdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </redirdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <channel supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </channel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <crypto supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </crypto>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <interface supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>passt</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <panic supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>isa</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>hyperv</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </panic>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <console supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>null</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dev</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pipe</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stdio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>udp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tcp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu-vdagent</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </console>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <gic supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <genid supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backup supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <async-teardown supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <s390-pv supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <ps2 supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tdx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sev supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sgx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hyperv supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='features'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>relaxed</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vapic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>spinlocks</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vpindex</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>runtime</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>synic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stimer</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reset</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vendor_id</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>frequencies</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reenlightenment</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tlbflush</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ipi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>avic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emsr_bitmap</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>xmm_input</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hyperv>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <launchSecurity supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </features>
Feb 19 20:04:25 compute-0 nova_compute[188777]: </domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.210 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 19 20:04:25 compute-0 nova_compute[188777]: <domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <path>/usr/libexec/qemu-kvm</path>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <domain>kvm</domain>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <arch>x86_64</arch>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <vcpu max='240'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <iothreads supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <os supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='firmware'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <loader supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>rom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pflash</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='readonly'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>yes</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='secure'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>no</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </loader>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </os>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-passthrough' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='hostPassthroughMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='maximum' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='maximumMigratable'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>on</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>off</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='host-model' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <vendor>AMD</vendor>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='x2apic'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-deadline'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='hypervisor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc_adjust'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='spec-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='stibp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='cmp_legacy'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='overflow-recov'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='succor'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='amd-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='virt-ssbd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lbrv'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='tsc-scale'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='vmcb-clean'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='flushbyasid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pause-filter'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='pfthreshold'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='svme-addr-chk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <feature policy='disable' name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <mode name='custom' supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Broadwell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cascadelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='ClearwaterForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ddpd-u'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sha512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm3'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sm4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Cooperlake-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Denverton-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Dhyana-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Genoa-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Milan-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Rome-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-Turin-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amd-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='auto-ibrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vp2intersect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fs-gs-base-ns'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibpb-brtype'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='no-nested-data-bp'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='null-sel-clr-base'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='perfmon-v2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbpb'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='srso-user-kernel-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='stibp-always-on'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='EPYC-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='GraniteRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-128'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-256'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx10-512'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='prefetchiti'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Haswell-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-noTSX'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v6'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Icelake-Server-v7'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='IvyBridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='KnightsMill-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4fmaps'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-4vnniw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512er'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512pf'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G4-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Opteron_G5-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fma4'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tbm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xop'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SapphireRapids-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='amx-tile'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-bf16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-fp16'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512-vpopcntdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bitalg'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vbmi2'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrc'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fzrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='la57'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='taa-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='tsx-ldtrk'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='SierraForest-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ifma'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-ne-convert'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx-vnni-int8'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bhi-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='bus-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cmpccxadd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fbsdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='fsrs'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ibrs-all'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='intel-psfd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ipred-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='lam'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mcdt-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pbrsb-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='psdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rrsba-ctrl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='sbdr-ssdp-no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='serialize'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vaes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='vpclmulqdq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Client-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='hle'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='rtm'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Skylake-Server-v5'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512bw'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512cd'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512dq'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512f'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='avx512vl'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='invpcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pcid'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='pku'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='mpx'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v2'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v3'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='core-capability'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='split-lock-detect'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='Snowridge-v4'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='cldemote'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='erms'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='gfni'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdir64b'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='movdiri'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='xsaves'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='athlon-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='core2duo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='coreduo-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='n270-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='ss'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <blockers model='phenom-v1'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnow'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <feature name='3dnowext'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </blockers>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </mode>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <memoryBacking supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <enum name='sourceType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>anonymous</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <value>memfd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </memoryBacking>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <disk supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='diskDevice'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>disk</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cdrom</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>floppy</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>lun</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ide</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>fdc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>sata</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <graphics supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vnc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egl-headless</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </graphics>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <video supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='modelType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vga</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>cirrus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>none</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>bochs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ramfb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </video>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hostdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='mode'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>subsystem</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='startupPolicy'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>mandatory</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>requisite</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>optional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='subsysType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pci</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>scsi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='capsType'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='pciBackend'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hostdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <rng supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtio-non-transitional</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>random</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>egd</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <filesystem supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='driverType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>path</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>handle</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>virtiofs</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </filesystem>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tpm supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-tis</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tpm-crb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emulator</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>external</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendVersion'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>2.0</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </tpm>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <redirdev supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='bus'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>usb</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </redirdev>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <channel supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </channel>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <crypto supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendModel'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>builtin</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </crypto>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <interface supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='backendType'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>default</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>passt</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <panic supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='model'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>isa</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>hyperv</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </panic>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <console supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='type'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>null</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vc</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pty</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dev</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>file</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>pipe</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stdio</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>udp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tcp</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>unix</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>qemu-vdagent</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>dbus</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </console>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   <features>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <gic supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <vmcoreinfo supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <genid supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backingStoreInput supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <backup supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <async-teardown supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <s390-pv supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <ps2 supported='yes'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <tdx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sev supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <sgx supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <hyperv supported='yes'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <enum name='features'>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>relaxed</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vapic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>spinlocks</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vpindex</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>runtime</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>synic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>stimer</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reset</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>vendor_id</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>frequencies</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>reenlightenment</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>tlbflush</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>ipi</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>avic</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>emsr_bitmap</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <value>xmm_input</value>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </enum>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       <defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <spinlocks>4095</spinlocks>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <stimer_direct>on</stimer_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_direct>on</tlbflush_direct>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <tlbflush_extended>on</tlbflush_extended>
Feb 19 20:04:25 compute-0 nova_compute[188777]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 19 20:04:25 compute-0 nova_compute[188777]:       </defaults>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     </hyperv>
Feb 19 20:04:25 compute-0 nova_compute[188777]:     <launchSecurity supported='no'/>
Feb 19 20:04:25 compute-0 nova_compute[188777]:   </features>
Feb 19 20:04:25 compute-0 nova_compute[188777]: </domainCapabilities>
Feb 19 20:04:25 compute-0 nova_compute[188777]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
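[editor's note] The block above is the libvirt domainCapabilities XML that nova_compute fetches and logs at DEBUG level during startup; the `<model usable='no'>` entries and their sibling `<blockers>` elements are why most named CPU models are unavailable on this Westmere-era guest host. A minimal sketch of retrieving and summarizing the same data with libvirt-python and ElementTree — the connection URI, arch, and virt type below are assumptions, not taken from this log:

```python
# Sketch only: list CPU models this host cannot provide, with the features
# blocking them, from the same domainCapabilities XML nova logs above.
import xml.etree.ElementTree as ET

import libvirt  # libvirt-python; assumed installed on the hypervisor

conn = libvirt.open("qemu:///system")  # assumed local URI
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm")
conn.close()

root = ET.fromstring(caps_xml)
custom = root.find(".//cpu/mode[@name='custom']")
assert custom is not None, "no custom CPU mode reported by libvirt"

# Each <blockers model='X'> lists the features missing for model X.
blockers = {
    b.get("model"): [f.get("name") for f in b.findall("feature")]
    for b in custom.findall("blockers")
}
for model in custom.findall("model"):
    if model.get("usable") == "no":
        missing = ", ".join(blockers.get(model.text, [])) or "unknown"
        print(f"{model.text}: blocked by {missing}")
```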
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.280 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.281 188781 INFO nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Secure Boot support detected
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.282 188781 INFO nova.virt.libvirt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.290 188781 DEBUG nova.virt.libvirt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
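[editor's note] `_check_vtpm_support` succeeds here because the capabilities dump above advertises `<tpm supported='yes'>` with tpm-tis/tpm-crb models, the `emulator` backend, and backend version 2.0. A self-contained sketch of the same check, under the same assumed URI/arch as before:

```python
# Sketch: confirm emulated vTPM support from the domainCapabilities XML.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")  # assumed URI
root = ET.fromstring(conn.getDomainCapabilities(None, "x86_64", None, "kvm"))
conn.close()

tpm = root.find(".//devices/tpm")
supported = tpm is not None and tpm.get("supported") == "yes"
backends = (
    [v.text for v in tpm.findall("enum[@name='backendModel']/value")]
    if supported else []
)
print("emulated vTPM usable:", supported and "emulator" in backends)
```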
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.314 188781 INFO nova.virt.node [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Determined node identity c266959e-952e-41ad-bc2e-56513f39ec2d from /var/lib/nova/compute_id
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.329 188781 WARNING nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Compute nodes ['c266959e-952e-41ad-bc2e-56513f39ec2d'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.348 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.374 188781 WARNING nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.374 188781 DEBUG oslo_concurrency.lockutils [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.375 188781 DEBUG oslo_concurrency.lockutils [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.375 188781 DEBUG oslo_concurrency.lockutils [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
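[editor's note] The Acquiring/acquired/released triplet above is the standard trace emitted by oslo.concurrency's lockutils when a decorated method runs; `clean_compute_node_cache` held the `compute_resources` semaphore for under a millisecond. A minimal illustration of the pattern (the function name is a placeholder, not nova code):

```python
# Sketch of the locking pattern behind the three DEBUG lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_cache_example():
    # Runs with the "compute_resources" semaphore held; entry and exit
    # produce the "acquired ... waited" / "released ... held" messages
    # when DEBUG logging is enabled.
    pass

clean_cache_example()
```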
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.375 188781 DEBUG nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:04:25 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 19 20:04:25 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.600 188781 WARNING nova.virt.libvirt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.601 188781 DEBUG nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5944MB free_disk=72.49664306640625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.601 188781 DEBUG oslo_concurrency.lockutils [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.602 188781 DEBUG oslo_concurrency.lockutils [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.619 188781 WARNING nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] No compute node record for compute-0.ctlplane.example.com:c266959e-952e-41ad-bc2e-56513f39ec2d: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host c266959e-952e-41ad-bc2e-56513f39ec2d could not be found.
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.643 188781 INFO nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: c266959e-952e-41ad-bc2e-56513f39ec2d
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.705 188781 DEBUG nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:04:25 compute-0 nova_compute[188777]: 2026-02-19 20:04:25.706 188781 DEBUG nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:04:25 compute-0 rsyslogd[1014]: imjournal from <np0005624785:nova_compute>: begin to drop messages due to rate-limiting
Feb 19 20:04:26 compute-0 nova_compute[188777]: 2026-02-19 20:04:26.634 188781 INFO nova.scheduler.client.report [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [req-a5da693f-0dea-4c86-bd83-db4f21d39495] Created resource provider record via placement API for resource provider with UUID c266959e-952e-41ad-bc2e-56513f39ec2d and name compute-0.ctlplane.example.com.
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.024 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 19 20:04:27 compute-0 nova_compute[188777]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.024 188781 INFO nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] kernel doesn't support AMD SEV
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.024 188781 DEBUG nova.compute.provider_tree [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.025 188781 DEBUG nova.virt.libvirt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.071 188781 DEBUG nova.scheduler.client.report [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Updated inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.071 188781 DEBUG nova.compute.provider_tree [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Updating resource provider c266959e-952e-41ad-bc2e-56513f39ec2d generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.071 188781 DEBUG nova.compute.provider_tree [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.153 188781 DEBUG nova.compute.provider_tree [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Updating resource provider c266959e-952e-41ad-bc2e-56513f39ec2d generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.177 188781 DEBUG nova.compute.resource_tracker [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.177 188781 DEBUG oslo_concurrency.lockutils [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.178 188781 DEBUG nova.service [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.270 188781 DEBUG nova.service [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 19 20:04:27 compute-0 nova_compute[188777]: 2026-02-19 20:04:27.270 188781 DEBUG nova.servicegroup.drivers.db [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 19 20:04:28 compute-0 sshd-session[189101]: Accepted publickey for zuul from 192.168.122.30 port 52760 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 20:04:28 compute-0 systemd-logind[810]: New session 25 of user zuul.
Feb 19 20:04:28 compute-0 systemd[1]: Started Session 25 of User zuul.
Feb 19 20:04:28 compute-0 sshd-session[189101]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:04:29 compute-0 python3.9[189254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:04:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:04:30.409 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:04:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:04:30.410 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:04:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:04:30.410 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:04:30 compute-0 sudo[189408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkeljhpuhgyuoimmwsfrcwrjtetmtxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531470.169861-31-22887669383937/AnsiballZ_systemd_service.py'
Feb 19 20:04:30 compute-0 sudo[189408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:30 compute-0 python3.9[189411]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:04:30 compute-0 systemd[1]: Reloading.
Feb 19 20:04:31 compute-0 systemd-rc-local-generator[189431]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:04:31 compute-0 systemd-sysv-generator[189438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:04:31 compute-0 sudo[189408]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:31 compute-0 python3.9[189603]: ansible-ansible.builtin.service_facts Invoked
Feb 19 20:04:31 compute-0 network[189620]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 19 20:04:31 compute-0 network[189621]: 'network-scripts' will be removed from distribution in near future.
Feb 19 20:04:31 compute-0 network[189622]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 20:04:32 compute-0 podman[189636]: 2026-02-19 20:04:32.63237014 +0000 UTC m=+0.047620349 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:04:34 compute-0 sudo[189912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cesnprnsstrofckueysahnjtcxikngzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531474.3634677-50-114234801454468/AnsiballZ_systemd_service.py'
Feb 19 20:04:34 compute-0 sudo[189912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:34 compute-0 python3.9[189915]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:04:34 compute-0 sudo[189912]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:35 compute-0 sudo[190068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbvjbyafzsjhskyjgtpzmfhjguhkvsgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531475.0971982-60-265197387416905/AnsiballZ_file.py'
Feb 19 20:04:35 compute-0 sudo[190068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:35 compute-0 python3.9[190071]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:35 compute-0 sudo[190068]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:35 compute-0 rsyslogd[1014]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 20:04:36 compute-0 sudo[190222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neuszqaohdaeojmoxkylmtkqxofmepex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531475.9734464-68-235834854584746/AnsiballZ_file.py'
Feb 19 20:04:36 compute-0 sudo[190222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:36 compute-0 python3.9[190225]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:36 compute-0 sudo[190222]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:36 compute-0 sshd-session[189916]: Received disconnect from 103.213.238.91 port 52376:11: Bye Bye [preauth]
Feb 19 20:04:36 compute-0 sshd-session[189916]: Disconnected from authenticating user root 103.213.238.91 port 52376 [preauth]
Feb 19 20:04:36 compute-0 sudo[190375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strzxshqimbmtspkolbrdfscsofaoghx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531476.5960183-77-78204739199904/AnsiballZ_command.py'
Feb 19 20:04:36 compute-0 sudo[190375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:37 compute-0 python3.9[190378]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:04:37 compute-0 sudo[190375]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:37 compute-0 python3.9[190530]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 20:04:38 compute-0 sudo[190680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvqswxltciqhepnbfykzqsgyqtrujmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531478.097036-95-45639604834377/AnsiballZ_systemd_service.py'
Feb 19 20:04:38 compute-0 sudo[190680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:38 compute-0 python3.9[190683]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:04:38 compute-0 systemd[1]: Reloading.
Feb 19 20:04:38 compute-0 systemd-rc-local-generator[190706]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:04:38 compute-0 systemd-sysv-generator[190712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:04:38 compute-0 sudo[190680]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:39 compute-0 sudo[190876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnusnowbefddtimcbwrpsafhlyoruazi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531479.0846384-103-39005152455188/AnsiballZ_command.py'
Feb 19 20:04:39 compute-0 sudo[190876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:39 compute-0 python3.9[190879]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:04:39 compute-0 sudo[190876]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:39 compute-0 sudo[191030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumpkkmnvspitqoajleackabpgmzbjct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531479.6662922-112-226936937899386/AnsiballZ_file.py'
Feb 19 20:04:39 compute-0 sudo[191030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:40 compute-0 python3.9[191033]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:04:40 compute-0 sudo[191030]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:40 compute-0 python3.9[191183]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:41 compute-0 sudo[191335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsuyhnvvfhnehallhnkffsgpgcelykui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531481.1184797-128-258986966921647/AnsiballZ_group.py'
Feb 19 20:04:41 compute-0 sudo[191335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:41 compute-0 python3.9[191338]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Feb 19 20:04:41 compute-0 sudo[191335]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:42 compute-0 sudo[191488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tynsyqocvdfgzuctlcgiuxrqmjkhtqoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531481.958473-139-227361625480108/AnsiballZ_getent.py'
Feb 19 20:04:42 compute-0 sudo[191488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:42 compute-0 python3.9[191491]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Feb 19 20:04:42 compute-0 sudo[191488]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:42 compute-0 sudo[191642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxttlvkpffsuujfieuwnejenjunozota ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531482.704119-147-85891741722607/AnsiballZ_group.py'
Feb 19 20:04:42 compute-0 sudo[191642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:43 compute-0 python3.9[191645]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 19 20:04:43 compute-0 groupadd[191646]: group added to /etc/group: name=ceilometer, GID=42405
Feb 19 20:04:43 compute-0 groupadd[191646]: group added to /etc/gshadow: name=ceilometer
Feb 19 20:04:43 compute-0 groupadd[191646]: new group: name=ceilometer, GID=42405
Feb 19 20:04:43 compute-0 sudo[191642]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:43 compute-0 sudo[191801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxmxvwwlihyfxzmnedtavsttjxdhnsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531483.312883-155-30430020120309/AnsiballZ_user.py'
Feb 19 20:04:43 compute-0 sudo[191801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:43 compute-0 python3.9[191804]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 19 20:04:43 compute-0 useradd[191806]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/1
Feb 19 20:04:43 compute-0 useradd[191806]: add 'ceilometer' to group 'libvirt'
Feb 19 20:04:43 compute-0 useradd[191806]: add 'ceilometer' to shadow group 'libvirt'
Feb 19 20:04:43 compute-0 sudo[191801]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:45 compute-0 python3.9[191963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:45 compute-0 python3.9[192084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531484.6726248-181-123285380850038/.source.conf _original_basename=ceilometer.conf follow=False checksum=5c6a9288d15d1b05b1484826ce363ad306e9930c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:46 compute-0 python3.9[192234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:46 compute-0 podman[192329]: 2026-02-19 20:04:46.393055108 +0000 UTC m=+0.076123464 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:04:46 compute-0 python3.9[192367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531485.7450752-181-40698188147620/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:46 compute-0 python3.9[192531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:47 compute-0 python3.9[192652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531486.6236954-181-152423032274825/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:48 compute-0 python3.9[192802]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:48 compute-0 python3.9[192954]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:49 compute-0 python3.9[193106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:49 compute-0 python3.9[193227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531488.7793274-240-92368250497547/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:04:50 compute-0 python3.9[193377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:50 compute-0 python3.9[193498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531489.7951906-240-41652021560375/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:04:51 compute-0 python3.9[193649]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:51 compute-0 python3.9[193771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531490.799077-269-30629784521118/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:04:52 compute-0 python3.9[193921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:52 compute-0 python3.9[194042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531491.8779368-285-54124926275552/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:53 compute-0 python3.9[194192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:53 compute-0 python3.9[194313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531492.923161-300-159916965144237/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:54 compute-0 python3.9[194463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:04:54 compute-0 python3.9[194584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531494.002774-315-226926192559448/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:55 compute-0 sudo[194734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebzhwzlfzxajvwimglhcpbcefpgmdyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531495.1428-330-56046256412098/AnsiballZ_file.py'
Feb 19 20:04:55 compute-0 sudo[194734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:55 compute-0 python3.9[194737]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:55 compute-0 sudo[194734]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:55 compute-0 sudo[194887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arqheotglqvblmotbvymweqrydhfnvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531495.702886-338-79086026997226/AnsiballZ_file.py'
Feb 19 20:04:55 compute-0 sudo[194887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:56 compute-0 python3.9[194890]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:04:56 compute-0 sudo[194887]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:56 compute-0 python3.9[195040]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:57 compute-0 python3.9[195192]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:57 compute-0 python3.9[195344]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:04:58 compute-0 sudo[195496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pilveygikbnzynayhuzontoqrljxhpuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531497.9489484-370-65735706554504/AnsiballZ_file.py'
Feb 19 20:04:58 compute-0 sudo[195496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:58 compute-0 python3.9[195499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:04:58 compute-0 sudo[195496]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:58 compute-0 sudo[195649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aobbpnccdjlkvpxtfugapphsiauhaeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531498.5278602-378-251487753106132/AnsiballZ_systemd_service.py'
Feb 19 20:04:58 compute-0 sudo[195649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:04:59 compute-0 python3.9[195652]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:04:59 compute-0 systemd[1]: Reloading.
Feb 19 20:04:59 compute-0 systemd-rc-local-generator[195684]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:04:59 compute-0 systemd-sysv-generator[195691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:04:59 compute-0 systemd[1]: Listening on Podman API Socket.
Feb 19 20:04:59 compute-0 sudo[195649]: pam_unix(sudo:session): session closed for user root
Feb 19 20:04:59 compute-0 sudo[195849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfscjuntgkzutyatotnxvzexzvdwlzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531499.642456-387-233046103168872/AnsiballZ_stat.py'
Feb 19 20:04:59 compute-0 sudo[195849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:00 compute-0 python3.9[195852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:00 compute-0 sudo[195849]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:00 compute-0 sudo[195973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjfreqynvkrjzvgizrqnynepetwwfon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531499.642456-387-233046103168872/AnsiballZ_copy.py'
Feb 19 20:05:00 compute-0 sudo[195973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:00 compute-0 sshd-session[195656]: Invalid user minecraft from 103.250.11.249 port 41162
Feb 19 20:05:00 compute-0 python3.9[195976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531499.642456-387-233046103168872/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:00 compute-0 sudo[195973]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:00 compute-0 sshd-session[195656]: Received disconnect from 103.250.11.249 port 41162:11: Bye Bye [preauth]
Feb 19 20:05:00 compute-0 sshd-session[195656]: Disconnected from invalid user minecraft 103.250.11.249 port 41162 [preauth]
Feb 19 20:05:00 compute-0 sudo[196050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbruodjqmnrdcprthhnptcfmdtwpbsdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531499.642456-387-233046103168872/AnsiballZ_stat.py'
Feb 19 20:05:00 compute-0 sudo[196050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:00 compute-0 python3.9[196053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:00 compute-0 sudo[196050]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:01 compute-0 sudo[196174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vllglzipgimhoxzuiwlwlkcujajksffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531499.642456-387-233046103168872/AnsiballZ_copy.py'
Feb 19 20:05:01 compute-0 sudo[196174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:01 compute-0 python3.9[196177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531499.642456-387-233046103168872/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:01 compute-0 sudo[196174]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:02 compute-0 sudo[196327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akszrtdxciexsukkrtxhkxnfoeuxpwdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531501.9176657-419-25917190579622/AnsiballZ_file.py'
Feb 19 20:05:02 compute-0 sudo[196327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:02 compute-0 python3.9[196330]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:02 compute-0 sudo[196327]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:02 compute-0 sudo[196491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdzezgkyfikfzzafqmclalnmknyfxpku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531502.5322936-427-194156769368891/AnsiballZ_file.py'
Feb 19 20:05:02 compute-0 sudo[196491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:02 compute-0 podman[196454]: 2026-02-19 20:05:02.807810481 +0000 UTC m=+0.066294725 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:05:02 compute-0 python3.9[196500]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:03 compute-0 sudo[196491]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:03 compute-0 sudo[196650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azztttetxndaimjtetllwfhotulxhpja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531503.1778262-435-172504727036985/AnsiballZ_stat.py'
Feb 19 20:05:03 compute-0 sudo[196650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:03 compute-0 python3.9[196653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:03 compute-0 sudo[196650]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:03 compute-0 sudo[196774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfkrsvomkhxfichlflbvdavmvocumyan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531503.1778262-435-172504727036985/AnsiballZ_copy.py'
Feb 19 20:05:03 compute-0 sudo[196774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:04 compute-0 python3.9[196777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531503.1778262-435-172504727036985/.source.json _original_basename=.xp4d4frf follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:04 compute-0 sudo[196774]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:04 compute-0 python3.9[196927]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:06 compute-0 sudo[197348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teunbvpsdcslghuauelnlmbgbgwmvtmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531506.0359135-475-44927573962506/AnsiballZ_container_config_data.py'
Feb 19 20:05:06 compute-0 sudo[197348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:06 compute-0 python3.9[197351]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Feb 19 20:05:06 compute-0 sudo[197348]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:07 compute-0 sudo[197501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmcwyclfcuyrefqzefdpkjzvwhmoaxrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531506.9489105-486-207059939396642/AnsiballZ_container_config_hash.py'
Feb 19 20:05:07 compute-0 sudo[197501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:07 compute-0 python3.9[197504]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:05:07 compute-0 sudo[197501]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:08 compute-0 sudo[197654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyqqzrqxozejzfdqwyqsbyzkbhvuncjj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531507.8951313-496-50670131806643/AnsiballZ_edpm_container_manage.py'
Feb 19 20:05:08 compute-0 sudo[197654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:08 compute-0 python3[197657]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:05:08 compute-0 sshd-session[193501]: Received disconnect from 103.103.245.7 port 52932:11: Bye Bye [preauth]
Feb 19 20:05:08 compute-0 sshd-session[193501]: Disconnected from authenticating user root 103.103.245.7 port 52932 [preauth]
Feb 19 20:05:09 compute-0 podman[197696]: 2026-02-19 20:05:09.799892586 +0000 UTC m=+0.084254839 container create 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:05:09 compute-0 podman[197696]: 2026-02-19 20:05:09.751265006 +0000 UTC m=+0.035627309 image pull 39912de8a1671ef27329277267076a00cb1dc71d7f9b7d4bbadf1cbd2c1f36c4 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Feb 19 20:05:09 compute-0 python3[197657]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Feb 19 20:05:09 compute-0 sudo[197654]: pam_unix(sudo:session): session closed for user root
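
Comparing the config_data dict in the container-create event with the PODMAN-CONTAINER-DEBUG command line shows a direct mapping: 'environment' becomes repeated --env KEY=VALUE flags, 'net' becomes --network, 'security_opt' becomes --security-opt, 'user' becomes --user, each 'volumes' entry becomes --volume, and 'image' plus 'command' close the argument list. A condensed sketch of that translation (labels, --healthcheck-command, the conmon pidfile and the journald logging flags follow the same pattern and are omitted here):

def podman_create_args(name, cfg):
    """Translate an edpm config_data dict into podman-create arguments,
    following the mapping visible in the PODMAN-CONTAINER-DEBUG line."""
    args = ["podman", "create", "--name", name]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if "net" in cfg:
        args += ["--network", cfg["net"]]
    if "security_opt" in cfg:
        args += ["--security-opt", cfg["security_opt"]]
    if "user" in cfg:
        args += ["--user", cfg["user"]]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    args.append(cfg["command"])
    return args
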
Feb 19 20:05:10 compute-0 sudo[197884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyfpwxndhulwxexzdvvrjqndxnlpbzjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531510.070729-504-260628803402017/AnsiballZ_stat.py'
Feb 19 20:05:10 compute-0 sudo[197884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:10 compute-0 python3.9[197887]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:10 compute-0 sudo[197884]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:10 compute-0 sudo[198039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcpshdnbfmhmvowlcjhegauzdxoqwfuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531510.719685-513-195218977835338/AnsiballZ_file.py'
Feb 19 20:05:10 compute-0 sudo[198039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:11 compute-0 python3.9[198042]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:11 compute-0 sudo[198039]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:11 compute-0 sudo[198116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbjoykwksxahrypinngjffvqmjkpbjjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531510.719685-513-195218977835338/AnsiballZ_stat.py'
Feb 19 20:05:11 compute-0 sudo[198116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:11 compute-0 python3.9[198119]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:11 compute-0 sudo[198116]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:11 compute-0 sudo[198268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqrcjfbsagzmmbcfxfbmswflmxgibew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531511.509859-513-208276556905840/AnsiballZ_copy.py'
Feb 19 20:05:11 compute-0 sudo[198268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:11 compute-0 python3.9[198271]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531511.509859-513-208276556905840/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:11 compute-0 sudo[198268]: pam_unix(sudo:session): session closed for user root
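
The unit file itself is not reproduced in the journal; only its destination, owner and mode are logged. A minimal sketch of what such an edpm-managed unit plausibly contains, assuming the conventional podman start/stop wrapper and taking only the description string ("ceilometer_agent_compute container") and the PID file path (--conmon-pidfile /run/ceilometer_agent_compute.pid) from the surrounding log lines:

from pathlib import Path

# Hypothetical reconstruction; the unit actually shipped by the role may differ.
UNIT = """\
[Unit]
Description=ceilometer_agent_compute container
After=network.target

[Service]
Restart=always
ExecStart=/usr/bin/podman start ceilometer_agent_compute
ExecStop=/usr/bin/podman stop -t 10 ceilometer_agent_compute
Type=forking
PIDFile=/run/ceilometer_agent_compute.pid

[Install]
WantedBy=multi-user.target
"""
Path("/etc/systemd/system/edpm_ceilometer_agent_compute.service").write_text(UNIT)
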
Feb 19 20:05:12 compute-0 sudo[198345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyuciscmfvobarrgwckaqlfjtjvdnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531511.509859-513-208276556905840/AnsiballZ_systemd.py'
Feb 19 20:05:12 compute-0 sudo[198345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:12 compute-0 python3.9[198348]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:05:12 compute-0 systemd[1]: Reloading.
Feb 19 20:05:12 compute-0 systemd-rc-local-generator[198374]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:12 compute-0 systemd-sysv-generator[198377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:13 compute-0 sudo[198345]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:13 compute-0 sudo[198465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqioalqpeqcmahbyeeflwcohzskjumx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531511.509859-513-208276556905840/AnsiballZ_systemd.py'
Feb 19 20:05:13 compute-0 sudo[198465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:13 compute-0 python3.9[198468]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:05:13 compute-0 systemd[1]: Reloading.
Feb 19 20:05:13 compute-0 systemd-rc-local-generator[198498]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:13 compute-0 systemd-sysv-generator[198503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:13 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Feb 19 20:05:13 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd0db8ccc4b7d15b0c85f1bcc2e6db8cfb73c1c9f0abb0fa863ee9824ce8af9/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd0db8ccc4b7d15b0c85f1bcc2e6db8cfb73c1c9f0abb0fa863ee9824ce8af9/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd0db8ccc4b7d15b0c85f1bcc2e6db8cfb73c1c9f0abb0fa863ee9824ce8af9/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd0db8ccc4b7d15b0c85f1bcc2e6db8cfb73c1c9f0abb0fa863ee9824ce8af9/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:13 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.
Feb 19 20:05:13 compute-0 podman[198515]: 2026-02-19 20:05:13.931076345 +0000 UTC m=+0.110638996 container init 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216)
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: + sudo -E kolla_set_configs
Feb 19 20:05:13 compute-0 sudo[198536]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: sudo: unable to send audit message: Operation not permitted
Feb 19 20:05:13 compute-0 sudo[198536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:05:13 compute-0 podman[198515]: 2026-02-19 20:05:13.955876111 +0000 UTC m=+0.135438782 container start 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:05:13 compute-0 podman[198515]: ceilometer_agent_compute
Feb 19 20:05:13 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Validating config file
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Copying service configuration files
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:05:13 compute-0 ceilometer_agent_compute[198530]: INFO:__main__:Writing out command to execute
Feb 19 20:05:13 compute-0 sudo[198465]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:14 compute-0 sudo[198536]: pam_unix(sudo:session): session closed for user root
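
The kolla_set_configs output above implies the shape of /var/lib/kolla/config_files/config.json: a command plus one entry per copied file. A reconstruction from the logged copy operations, expressed as the Python-style dict the journal already uses for config_data; the source/dest pairs and the command read back from /run_command come from the log, while the owner and perm values are assumptions:

KOLLA_CONFIG = {
    "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
    "config_files": [
        {"source": "/var/lib/kolla/config_files/src/ceilometer.conf",
         "dest": "/etc/ceilometer/ceilometer.conf",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        {"source": "/var/lib/kolla/config_files/src/polling.yaml",
         "dest": "/etc/ceilometer/polling.yaml",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        {"source": "/var/lib/kolla/config_files/src/custom.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        {"source": "/var/lib/kolla/config_files/src/ceilometer-host-specific.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
    ],
}
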
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: ++ cat /run_command
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + ARGS=
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + sudo kolla_copy_cacerts
Feb 19 20:05:14 compute-0 podman[198537]: 2026-02-19 20:05:14.007099205 +0000 UTC m=+0.043885087 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:05:14 compute-0 systemd[1]: 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a-585e0d62bfaa16db.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:05:14 compute-0 systemd[1]: 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a-585e0d62bfaa16db.service: Failed with result 'exit-code'.
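
The transient .service above is the per-run healthcheck unit systemd spawns for podman; its first run exits 1 while the agent is still initializing, consistent with health_status=starting and health_failing_streak=1 in the preceding podman event. One way to follow the health state from the host, using podman's standard inspect fields:

import subprocess

def container_health(name="ceilometer_agent_compute"):
    """Return the current podman health status string, e.g. 'starting' or 'healthy'."""
    return subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

print(container_health())
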
Feb 19 20:05:14 compute-0 sudo[198560]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: sudo: unable to send audit message: Operation not permitted
Feb 19 20:05:14 compute-0 sudo[198560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:05:14 compute-0 sudo[198560]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + [[ ! -n '' ]]
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + . kolla_extend_start
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + umask 0022
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Feb 19 20:05:14 compute-0 python3.9[198713]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.750 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.751 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.752 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.753 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.754 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.755 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.756 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.757 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.758 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.759 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.760 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.761 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.762 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.782 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.782 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.782 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.782 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.782 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.782 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.783 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.784 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.785 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.786 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.787 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.788 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.789 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.790 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.791 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.791 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.792 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.794 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.794 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.990 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.998 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.998 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 19 20:05:14 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:14.998 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.105 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.106 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.107 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.108 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.109 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.110 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.111 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.112 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.113 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.114 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.115 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.116 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.117 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
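The block above is oslo.config's standard option dump, written once at service start-up because log_options = True; secret-bearing options are masked as ****. A minimal sketch of how a service produces such a dump (the option registered here is a placeholder, but CONF.log_opt_values() is the actual oslo.config call named in every line above):

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF
    # Placeholder option; the real agent registers hundreds of these.
    CONF.register_opts([cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml')])

    CONF([], project='ceilometer')
    # Emits one DEBUG line per option and closes with a row of asterisks,
    # exactly the shape of the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)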
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.118 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.120 15 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
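The parsed mapping above comes from the polling definition file (polling.cfg_file = polling.yaml, per the dump earlier). An equivalent polling.yaml might read:

    ---
    sources:
        - name: pollsters
          interval: 120
          meters:
            - power.state
            - cpu
            - memory.usage
            - disk.*
            - network.*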
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.131 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.131 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.131 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.131 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
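With compute.instance_discovery_method = libvirt_metadata (see the dump above), discovery queries the local hypervisor rather than the Nova API. A minimal, illustrative sketch of the read-only connection being opened here, using the libvirt-python binding:

    import libvirt

    # Read-only access is enough for metering; no VM state is modified.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # Running domains on this compute node; an empty list is what
        # produces the "Skip pollster ..., no resources found" lines below.
        domains = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
        print([d.name() for d in domains])
    finally:
        conn.close()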
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8a36e40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
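The alternating "Registering pollster ..." lines above and the "Executing discovery ... / Skip pollster ..." pairs below reflect a register-then-run pattern on a shared thread pool, sized by polling.threads_to_process_pollsters (1 here). A simplified, hypothetical sketch of that pattern (names are illustrative, not ceilometer's actual internals):

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover, threads=1):
        # One executor per cycle; with threads=1 the pollsters run serially,
        # hence the earlier warning that the cycle may take longer than expected.
        with ThreadPoolExecutor(max_workers=threads) as executor:
            futures = [executor.submit(poll_one, pollster, discover)
                       for pollster in pollsters]
            for future in futures:
                future.result()  # re-raise any worker exception

    def poll_one(pollster, discover):
        # Discovery runs first (e.g. the 'local_instances' method); with no
        # running instances it returns [], yielding a "Skip pollster" line.
        resources = discover(pollster)
        if not resources:
            return []
        return list(pollster.get_samples(resources))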
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.139 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:05:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:05:15.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
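The block above is one complete ceilometer polling cycle: each pollster runs its discovery method (local_instances here), is skipped when discovery returns no resources, and is marked finished once its samples are collected. A minimal sketch of that control flow, with hypothetical names (the real logic is _internal_pollster_run in ceilometer/polling/manager.py):

    # Minimal sketch of the polling cycle logged above; names are
    # illustrative, not the actual ceilometer implementation.
    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            return [(self.name, r) for r in resources]

    def run_polling_cycle(pollsters, discover):
        samples = []
        for pollster in pollsters:
            # "Executing discovery process for pollsters [...]"
            resources = discover("local_instances")
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                continue
            samples.extend(pollster.get_samples(resources))
            print(f"Finished processing pollster [{pollster.name}].")
        return samples

    # With no instances on the host, every compute pollster is skipped:
    run_polling_cycle([Pollster("memory.usage")], lambda method: [])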
Feb 19 20:05:15 compute-0 sudo[198876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtgvmirjnuibauxcknjkvqbevhbrvmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531515.1142523-558-199267829054833/AnsiballZ_stat.py'
Feb 19 20:05:15 compute-0 sudo[198876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:15 compute-0 python3.9[198879]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:15 compute-0 sudo[198876]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:15 compute-0 sudo[199002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpxkharbfeleltoypredbahydapinlju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531515.1142523-558-199267829054833/AnsiballZ_copy.py'
Feb 19 20:05:15 compute-0 sudo[199002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:16 compute-0 python3.9[199005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531515.1142523-558-199267829054833/.source.yaml _original_basename=.73j9wbzo follow=False checksum=c945c70701108267b88c3ca649b208a4be0c2264 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:16 compute-0 sudo[199002]: pam_unix(sudo:session): session closed for user root
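The stat/copy pair above is Ansible's idempotent file write: AnsiballZ_stat.py checksums the destination first, and AnsiballZ_copy.py only rewrites /var/lib/edpm-config/deployed_services.yaml when the SHA-1 differs. A rough sketch of that comparison (illustrative helper names, not the module source):

    # Sketch of the stat-then-copy idempotency check performed by the
    # AnsiballZ_stat.py / AnsiballZ_copy.py pair above.
    import hashlib
    import os
    import shutil

    def sha1_of(path):
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def copy_if_changed(src, dest, mode=0o644):
        # stat step: skip the write when checksums already match.
        if os.path.exists(dest) and sha1_of(dest) == sha1_of(src):
            return False  # task result: ok (unchanged)
        shutil.copy2(src, dest)
        os.chmod(dest, mode)
        return True  # task result: changed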
Feb 19 20:05:16 compute-0 sudo[199155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffotiefekoevmdgrsmqjnybmtelcyhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531516.209119-573-152306172169194/AnsiballZ_stat.py'
Feb 19 20:05:16 compute-0 sudo[199155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:16 compute-0 podman[199157]: 2026-02-19 20:05:16.577888385 +0000 UTC m=+0.111475483 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:05:16 compute-0 python3.9[199159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:16 compute-0 sudo[199155]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:16 compute-0 sudo[199307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddviwikypegwmbxnkyyokshssruvxqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531516.209119-573-152306172169194/AnsiballZ_copy.py'
Feb 19 20:05:16 compute-0 sudo[199307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:17 compute-0 python3.9[199310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531516.209119-573-152306172169194/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:17 compute-0 sudo[199307]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:17 compute-0 sudo[199460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hftgpkdodianuznnslejeqdrptqnpnnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531517.5786917-594-279049653078301/AnsiballZ_file.py'
Feb 19 20:05:17 compute-0 sudo[199460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:17 compute-0 python3.9[199463]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:17 compute-0 sudo[199460]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:18 compute-0 sudo[199613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idoikqokvyayploezqzgunftcdthbytc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531518.1258402-602-270534793149608/AnsiballZ_file.py'
Feb 19 20:05:18 compute-0 sudo[199613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:18 compute-0 python3.9[199616]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:18 compute-0 sudo[199613]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:18 compute-0 sudo[199766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrchhsduccxgzlgencximpfzuptcjkwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531518.6890438-610-186688619370896/AnsiballZ_stat.py'
Feb 19 20:05:18 compute-0 sudo[199766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:19 compute-0 python3.9[199769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:19 compute-0 sudo[199766]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:19 compute-0 nova_compute[188777]: 2026-02-19 20:05:19.272 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:19 compute-0 nova_compute[188777]: 2026-02-19 20:05:19.391 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:19 compute-0 sudo[199845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxshznqtldfezuuxdglohczgorqizzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531518.6890438-610-186688619370896/AnsiballZ_file.py'
Feb 19 20:05:19 compute-0 sudo[199845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:19 compute-0 python3.9[199848]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.yvxki78t recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:19 compute-0 sudo[199845]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:20 compute-0 python3.9[199998]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:21 compute-0 sudo[200419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hniastuuslcwefbmimljypdhbiphqfqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531521.5215247-647-180869058576320/AnsiballZ_container_config_data.py'
Feb 19 20:05:21 compute-0 sudo[200419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:21 compute-0 python3.9[200422]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Feb 19 20:05:21 compute-0 sudo[200419]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:22 compute-0 sudo[200572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpljhsbhjhjhpfsefdbogqmjexanqwsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531522.211586-658-130881849728941/AnsiballZ_container_config_hash.py'
Feb 19 20:05:22 compute-0 sudo[200572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:22 compute-0 python3.9[200575]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:05:22 compute-0 sudo[200572]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:23 compute-0 sudo[200725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcamhwnwvybaeniiphkxzdzskxnpqwpk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531522.9333706-668-224488171541440/AnsiballZ_edpm_container_manage.py'
Feb 19 20:05:23 compute-0 sudo[200725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:23 compute-0 python3[200728]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:05:23 compute-0 podman[200764]: 2026-02-19 20:05:23.622147513 +0000 UTC m=+0.070459581 container create fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible)
Feb 19 20:05:23 compute-0 podman[200764]: 2026-02-19 20:05:23.570707822 +0000 UTC m=+0.019019910 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Feb 19 20:05:23 compute-0 python3[200728]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /:/rootfs:ro --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl --path.rootfs=/rootfs
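The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage renders the config_data dictionary into a podman create invocation: environment entries become --env, volumes become --volume, net becomes --network, and the command list is appended after the image as entrypoint arguments. A simplified sketch of that rendering (an illustration under those assumptions, not the module's code; it omits details like --conmon-pidfile and the label flags):

    # Simplified config_data -> `podman create` argument rendering,
    # mirroring the debug line above.
    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        # Anything in 'command' is passed to the entrypoint, e.g. the
        # --collector.* flags seen above.
        args += cfg.get("command", [])
        return args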
Feb 19 20:05:23 compute-0 sudo[200725]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:24 compute-0 sudo[200952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcwyqzdxqaljmxwreyrbyfhgnjxjpchg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531523.873564-676-279461221150291/AnsiballZ_stat.py'
Feb 19 20:05:24 compute-0 sudo[200952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.267 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:05:24 compute-0 python3.9[200955]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:24 compute-0 sudo[200952]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.343 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.343 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.344 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.344 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.345 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.345 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.346 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.346 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.347 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.370 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.371 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.371 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
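The Acquiring/acquired/released triple above is oslo.concurrency's lock instrumentation: the synchronized decorator logs how long the caller waited for the named lock and how long it held it. A minimal sketch of that pattern using the oslo_concurrency API:

    # Minimal example of the lock pattern behind the "compute_resources"
    # log lines; requires the oslo.concurrency package.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs with the lock held; the decorator emits the
        # waited/held timings seen in the journal.
        pass

    clean_compute_node_cache()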
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.371 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.501 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.502 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5785MB free_disk=72.49619674682617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.503 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.503 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.566 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.566 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.588 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.601 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.603 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:05:24 compute-0 nova_compute[188777]: 2026-02-19 20:05:24.603 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
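The inventory payload reported to Placement at 20:05:24.601 determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked out from the values above:

    # Schedulable capacity per Placement's model:
    # (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(capacity, 2))
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 71.1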
Feb 19 20:05:24 compute-0 sudo[201107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpqzincapnjgbsrwtkauqcdhgsxlmfiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531524.5541222-685-182521088080952/AnsiballZ_file.py'
Feb 19 20:05:24 compute-0 sudo[201107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:24 compute-0 python3.9[201110]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:24 compute-0 sudo[201107]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:25 compute-0 sudo[201184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbdgvjmesymdqomfcuxytqtpewttoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531524.5541222-685-182521088080952/AnsiballZ_stat.py'
Feb 19 20:05:25 compute-0 sudo[201184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:25 compute-0 python3.9[201187]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:25 compute-0 sudo[201184]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:25 compute-0 sudo[201336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxgvgtvgnqpycqlbmjmxfoujzlyqbfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531525.3895888-685-180446385413143/AnsiballZ_copy.py'
Feb 19 20:05:25 compute-0 sudo[201336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:25 compute-0 python3.9[201339]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531525.3895888-685-180446385413143/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:25 compute-0 sudo[201336]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:26 compute-0 sudo[201413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwhzuwkygubewdcskmdelcnavbshhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531525.3895888-685-180446385413143/AnsiballZ_systemd.py'
Feb 19 20:05:26 compute-0 sudo[201413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:26 compute-0 python3.9[201416]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:05:26 compute-0 systemd[1]: Reloading.
Feb 19 20:05:26 compute-0 systemd-rc-local-generator[201438]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:26 compute-0 systemd-sysv-generator[201441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:26 compute-0 sudo[201413]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:26 compute-0 sudo[201531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzpatesabvxczsqknjsftegijzkcjwbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531525.3895888-685-180446385413143/AnsiballZ_systemd.py'
Feb 19 20:05:26 compute-0 sudo[201531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:27 compute-0 python3.9[201534]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:05:27 compute-0 systemd[1]: Reloading.
Feb 19 20:05:27 compute-0 systemd-sysv-generator[201567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:27 compute-0 systemd-rc-local-generator[201564]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:27 compute-0 systemd[1]: Starting node_exporter container...
Feb 19 20:05:27 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5b1170c7109d5ea9ea24edb755346ae001feefc3907fe211f67a04c6c63e7b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5b1170c7109d5ea9ea24edb755346ae001feefc3907fe211f67a04c6c63e7b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.
Feb 19 20:05:27 compute-0 podman[201580]: 2026-02-19 20:05:27.600181906 +0000 UTC m=+0.092660320 container init fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.609Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.609Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.609Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.609Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
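The two systemd-collector regexes above act as an include/exclude filter over unit names, so only the EDPM, Open vSwitch, virt, and rsyslog services are exported. A quick check of which names pass the include pattern (Go's regexp matching is unanchored, like re.search):

    # Which units pass --collector.systemd.unit-include?
    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_node_exporter.service", "ovs-vswitchd.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        print(unit, bool(unit_include.search(unit)))
    # Only sshd.service falls outside the include list.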
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=arp
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=bcache
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=bonding
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=btrfs
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=conntrack
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=cpu
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=diskstats
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=edac
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=filefd
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=filesystem
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=infiniband
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=ipvs
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=loadavg
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=mdadm
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=meminfo
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=netclass
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=netdev
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=netstat
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=nfs
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=nfsd
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=nvme
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=schedstat
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=sockstat
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=softnet
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=systemd
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=tapestats
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=vmstat
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=xfs
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.610Z caller=node_exporter.go:117 level=info collector=zfs
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.611Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Feb 19 20:05:27 compute-0 node_exporter[201595]: ts=2026-02-19T20:05:27.611Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Feb 19 20:05:27 compute-0 podman[201580]: 2026-02-19 20:05:27.622129227 +0000 UTC m=+0.114607621 container start fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:05:27 compute-0 podman[201580]: node_exporter
Feb 19 20:05:27 compute-0 systemd[1]: Started node_exporter container.
Feb 19 20:05:27 compute-0 sudo[201531]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:27 compute-0 podman[201604]: 2026-02-19 20:05:27.698009477 +0000 UTC m=+0.068372665 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:05:28 compute-0 python3.9[201777]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:05:28 compute-0 sudo[201927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gppywviogaofavvxmxpagycrlwlxlhyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531528.6353736-730-6852589889998/AnsiballZ_stat.py'
Feb 19 20:05:28 compute-0 sudo[201927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:29 compute-0 python3.9[201930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:29 compute-0 sudo[201927]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:29 compute-0 sudo[202053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inpcauljamjrbysfjszsopalqiqwhkxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531528.6353736-730-6852589889998/AnsiballZ_copy.py'
Feb 19 20:05:29 compute-0 sudo[202053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:29 compute-0 python3.9[202056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531528.6353736-730-6852589889998/.source.yaml _original_basename=.rq29hxvu follow=False checksum=9b4b02fd60c255c24c0550d054feffd9e5990c30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:29 compute-0 sudo[202053]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:30 compute-0 rsyslogd[1014]: imjournal: 2927 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 19 20:05:30 compute-0 sudo[202206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lubgxydcetnipedbmzowjnxlvragpbtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531529.9316404-745-25368610168549/AnsiballZ_stat.py'
Feb 19 20:05:30 compute-0 sudo[202206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:30 compute-0 python3.9[202209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:30 compute-0 sudo[202206]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:05:30.410 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:05:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:05:30.411 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:05:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:05:30.411 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:05:30 compute-0 sudo[202330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pquuubrorqvoqnhjfiqmsdhgrrrqgivb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531529.9316404-745-25368610168549/AnsiballZ_copy.py'
Feb 19 20:05:30 compute-0 sudo[202330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:30 compute-0 python3.9[202333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531529.9316404-745-25368610168549/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:30 compute-0 sudo[202330]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:31 compute-0 auditd[717]: Audit daemon rotating log files
Feb 19 20:05:31 compute-0 sudo[202483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huhzsiepfrtjhifdpbzobsoascrvtyat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531531.2518792-766-126521113763608/AnsiballZ_file.py'
Feb 19 20:05:31 compute-0 sudo[202483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:31 compute-0 python3.9[202486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:31 compute-0 sudo[202483]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:32 compute-0 sudo[202636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkarxgohmesuqhuactyrmmznnmqulnyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531531.841198-774-83906203059013/AnsiballZ_file.py'
Feb 19 20:05:32 compute-0 sudo[202636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:32 compute-0 python3.9[202639]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:32 compute-0 sudo[202636]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:32 compute-0 sudo[202789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlsrmldjpqbtpsaqmqjljtceglcyualy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531532.352226-782-254242966334344/AnsiballZ_stat.py'
Feb 19 20:05:32 compute-0 sudo[202789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:32 compute-0 python3.9[202792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:32 compute-0 sudo[202789]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:32 compute-0 sudo[202877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aemjrmhhqolyratlxpucqymiarxtxtis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531532.352226-782-254242966334344/AnsiballZ_file.py'
Feb 19 20:05:32 compute-0 sudo[202877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:32 compute-0 podman[202842]: 2026-02-19 20:05:32.941929283 +0000 UTC m=+0.040991753 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:05:33 compute-0 python3.9[202891]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.2fv3sdtm recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:33 compute-0 sudo[202877]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:33 compute-0 python3.9[203041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:35 compute-0 sudo[203462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhzyrvtmsevchorxsvanxrnmgucmpun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531535.2543504-819-199131954863224/AnsiballZ_container_config_data.py'
Feb 19 20:05:35 compute-0 sudo[203462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:35 compute-0 python3.9[203465]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Feb 19 20:05:35 compute-0 sudo[203462]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:36 compute-0 sudo[203615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpclhajryolwlqqrzajyklathlywovcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531536.0231616-830-29499819042051/AnsiballZ_container_config_hash.py'
Feb 19 20:05:36 compute-0 sudo[203615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:36 compute-0 python3.9[203618]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:05:36 compute-0 sudo[203615]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:36 compute-0 sudo[203768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exojafpilsrsevmczmrekqhjeaqolgvi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531536.7658343-840-207496723553274/AnsiballZ_edpm_container_manage.py'
Feb 19 20:05:36 compute-0 sudo[203768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:37 compute-0 python3[203771]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:05:38 compute-0 podman[203783]: 2026-02-19 20:05:38.346691144 +0000 UTC m=+1.093597787 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Feb 19 20:05:38 compute-0 podman[203880]: 2026-02-19 20:05:38.453754286 +0000 UTC m=+0.043329036 container create 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible)
Feb 19 20:05:38 compute-0 podman[203880]: 2026-02-19 20:05:38.43070824 +0000 UTC m=+0.020283010 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Feb 19 20:05:38 compute-0 python3[203771]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Feb 19 20:05:38 compute-0 sudo[203768]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:38 compute-0 sudo[204068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqrhkxjjrrrxacxbuiqarbsqjxhraat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531538.7141483-848-176164329141217/AnsiballZ_stat.py'
Feb 19 20:05:38 compute-0 sudo[204068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:39 compute-0 python3.9[204071]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:39 compute-0 sudo[204068]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:39 compute-0 sudo[204223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyofgecmysjelawvvzqmbpjtttpzexol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531539.3941047-857-144843773684933/AnsiballZ_file.py'
Feb 19 20:05:39 compute-0 sudo[204223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:39 compute-0 python3.9[204226]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:39 compute-0 sudo[204223]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:39 compute-0 sudo[204300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwumwuvciqifhheuydmswpsvtqmahmdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531539.3941047-857-144843773684933/AnsiballZ_stat.py'
Feb 19 20:05:39 compute-0 sudo[204300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:40 compute-0 python3.9[204303]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:40 compute-0 sudo[204300]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:40 compute-0 sudo[204452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwjxdsieldofxrzxyusndeyjayeblnri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531540.2144322-857-141636166002402/AnsiballZ_copy.py'
Feb 19 20:05:40 compute-0 sudo[204452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:40 compute-0 python3.9[204455]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531540.2144322-857-141636166002402/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:40 compute-0 sudo[204452]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:41 compute-0 sudo[204529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxbwbaktashaaqjemytpbybzklmagqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531540.2144322-857-141636166002402/AnsiballZ_systemd.py'
Feb 19 20:05:41 compute-0 sudo[204529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:41 compute-0 python3.9[204532]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:05:41 compute-0 systemd[1]: Reloading.
Feb 19 20:05:41 compute-0 systemd-rc-local-generator[204552]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:41 compute-0 systemd-sysv-generator[204557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:41 compute-0 sudo[204529]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:42 compute-0 sudo[204648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cascdutkjpmsuankyzipxwmfmlwpxifn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531540.2144322-857-141636166002402/AnsiballZ_systemd.py'
Feb 19 20:05:42 compute-0 sudo[204648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:42 compute-0 python3.9[204651]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:05:42 compute-0 systemd[1]: Reloading.
Feb 19 20:05:42 compute-0 systemd-rc-local-generator[204674]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:42 compute-0 systemd-sysv-generator[204679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:42 compute-0 systemd[1]: Starting podman_exporter container...
Feb 19 20:05:42 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d72a5d709d047256f3d34ee9b4f245454bacd14cbaa94bbd3a478dd0e00195/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d72a5d709d047256f3d34ee9b4f245454bacd14cbaa94bbd3a478dd0e00195/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.
Feb 19 20:05:42 compute-0 podman[204697]: 2026-02-19 20:05:42.751049725 +0000 UTC m=+0.108175998 container init 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.765Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.765Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.765Z caller=handler.go:94 level=info msg="enabled collectors"
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.765Z caller=handler.go:105 level=info collector=container
Feb 19 20:05:42 compute-0 podman[204697]: 2026-02-19 20:05:42.783472686 +0000 UTC m=+0.140598939 container start 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:05:42 compute-0 podman[204697]: podman_exporter
Feb 19 20:05:42 compute-0 systemd[1]: Starting Podman API Service...
Feb 19 20:05:42 compute-0 systemd[1]: Started Podman API Service.
Feb 19 20:05:42 compute-0 systemd[1]: Started podman_exporter container.
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="Setting parallel job count to 25"
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="Using sqlite as database backend"
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Feb 19 20:05:42 compute-0 sudo[204648]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:42 compute-0 podman[204724]: @ - - [19/Feb/2026:20:05:42 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Feb 19 20:05:42 compute-0 podman[204724]: time="2026-02-19T20:05:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:05:42 compute-0 podman[204721]: 2026-02-19 20:05:42.839091529 +0000 UTC m=+0.049235513 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:05:42 compute-0 podman[204724]: @ - - [19/Feb/2026:20:05:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18632 "" "Go-http-client/1.1"
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.842Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Feb 19 20:05:42 compute-0 systemd[1]: 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2-577485b4d3851e8b.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:05:42 compute-0 systemd[1]: 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2-577485b4d3851e8b.service: Failed with result 'exit-code'.
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.843Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Feb 19 20:05:42 compute-0 podman_exporter[204712]: ts=2026-02-19T20:05:42.843Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Feb 19 20:05:43 compute-0 python3.9[204908]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:05:44 compute-0 sudo[205060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niejunhtcvnswtvspbodsglrmxogjusj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531543.828458-902-92252686064526/AnsiballZ_stat.py'
Feb 19 20:05:44 compute-0 sudo[205060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:44 compute-0 podman[205062]: 2026-02-19 20:05:44.148294746 +0000 UTC m=+0.055783718 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 19 20:05:44 compute-0 systemd[1]: 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a-585e0d62bfaa16db.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:05:44 compute-0 systemd[1]: 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a-585e0d62bfaa16db.service: Failed with result 'exit-code'.
Feb 19 20:05:44 compute-0 python3.9[205064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:44 compute-0 sshd-session[204909]: Invalid user bitnami from 158.174.210.161 port 43774
Feb 19 20:05:44 compute-0 sudo[205060]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:44 compute-0 sshd-session[204909]: Received disconnect from 158.174.210.161 port 43774:11: Bye Bye [preauth]
Feb 19 20:05:44 compute-0 sshd-session[204909]: Disconnected from invalid user bitnami 158.174.210.161 port 43774 [preauth]
Feb 19 20:05:44 compute-0 sudo[205205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgjyyzmfyhbqywvvxosrutbtznxuxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531543.828458-902-92252686064526/AnsiballZ_copy.py'
Feb 19 20:05:44 compute-0 sudo[205205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:45 compute-0 python3.9[205208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531543.828458-902-92252686064526/.source.yaml _original_basename=.k70ofzzs follow=False checksum=01dc033d0a4074ce0542618b6be2116997a406ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:45 compute-0 sudo[205205]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:45 compute-0 sudo[205358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjqacfpvimpdnjoebjjgpuybltpejojx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531545.2442272-917-258683584865048/AnsiballZ_stat.py'
Feb 19 20:05:45 compute-0 sudo[205358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:45 compute-0 python3.9[205361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:45 compute-0 sudo[205358]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:45 compute-0 sudo[205482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihndrpssmdcuypadgujofaxcubeevwju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531545.2442272-917-258683584865048/AnsiballZ_copy.py'
Feb 19 20:05:45 compute-0 sudo[205482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:46 compute-0 python3.9[205485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531545.2442272-917-258683584865048/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:46 compute-0 sudo[205482]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:46 compute-0 sudo[205648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhxnzsvnnkncxipytyuiswkhcfklxyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531546.7442174-938-204736131599490/AnsiballZ_file.py'
Feb 19 20:05:46 compute-0 sudo[205648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:46 compute-0 podman[205609]: 2026-02-19 20:05:46.996108748 +0000 UTC m=+0.065586977 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:05:47 compute-0 python3.9[205658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:47 compute-0 sudo[205648]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:47 compute-0 sudo[205814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aybndlksmzfaoildrvfcrffxufrvkzce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531547.3258414-946-72659949833884/AnsiballZ_file.py'
Feb 19 20:05:47 compute-0 sudo[205814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:47 compute-0 python3.9[205817]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:05:47 compute-0 sudo[205814]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:48 compute-0 sudo[205967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sahozidkmaeljqwbqkfocbkvsdzcbzce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531547.9151368-954-244073793155564/AnsiballZ_stat.py'
Feb 19 20:05:48 compute-0 sudo[205967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:48 compute-0 python3.9[205970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:05:48 compute-0 sudo[205967]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:48 compute-0 sudo[206046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkmaqstigxlofkcpqzpmqazfqjvtelw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531547.9151368-954-244073793155564/AnsiballZ_file.py'
Feb 19 20:05:48 compute-0 sudo[206046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:48 compute-0 python3.9[206049]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.sj28i6jh recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:48 compute-0 sudo[206046]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:49 compute-0 python3.9[206199]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:51 compute-0 sudo[206620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcasfrzwckdvtrvdvxyxwutnngaguyri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531550.9473605-991-179303194854066/AnsiballZ_container_config_data.py'
Feb 19 20:05:51 compute-0 sudo[206620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:51 compute-0 python3.9[206623]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Feb 19 20:05:51 compute-0 sudo[206620]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:51 compute-0 sudo[206773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwarsnltpumumhuzfcpgxkpdygvioqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531551.7823303-1002-38911163403511/AnsiballZ_container_config_hash.py'
Feb 19 20:05:51 compute-0 sudo[206773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:52 compute-0 python3.9[206776]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:05:52 compute-0 sudo[206773]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:52 compute-0 sudo[206926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvflzpqnmsmkiyowwjfktcvffwicarg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531552.4663022-1012-69241929346974/AnsiballZ_edpm_container_manage.py'
Feb 19 20:05:52 compute-0 sudo[206926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:52 compute-0 python3[206929]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:05:55 compute-0 podman[206943]: 2026-02-19 20:05:55.331405888 +0000 UTC m=+2.380928317 image pull ba17a9079b86bdce2cd9e03cad5d6d4d255cc298efd741b09239c34192e5621b quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 19 20:05:55 compute-0 podman[207040]: 2026-02-19 20:05:55.503505999 +0000 UTC m=+0.091233875 container create 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, release=1770267347, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=ubi9/ubi-minimal, version=9.7, io.openshift.expose-services=)
Feb 19 20:05:55 compute-0 podman[207040]: 2026-02-19 20:05:55.43148105 +0000 UTC m=+0.019208956 image pull ba17a9079b86bdce2cd9e03cad5d6d4d255cc298efd741b09239c34192e5621b quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 19 20:05:55 compute-0 python3[206929]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 19 20:05:55 compute-0 sudo[206926]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:55 compute-0 sudo[207228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfmokifjixxmuqagctgfiramnzlqkch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531555.7466784-1020-263882295978754/AnsiballZ_stat.py'
Feb 19 20:05:55 compute-0 sudo[207228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:56 compute-0 python3.9[207231]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:56 compute-0 sudo[207228]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:56 compute-0 sudo[207383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksgjpatuvviioyugjkjfpwreigarxcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531556.4964664-1029-12113114362115/AnsiballZ_file.py'
Feb 19 20:05:56 compute-0 sudo[207383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:56 compute-0 python3.9[207386]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:56 compute-0 sudo[207383]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:57 compute-0 sudo[207460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqzupzufmhwgdekhjjwwmafigwiczxgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531556.4964664-1029-12113114362115/AnsiballZ_stat.py'
Feb 19 20:05:57 compute-0 sudo[207460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:57 compute-0 python3.9[207463]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:05:57 compute-0 sudo[207460]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:57 compute-0 sudo[207625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dobhemrddnnmwwsbdwttmqxwjuhicjpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531557.5712013-1029-29950447400382/AnsiballZ_copy.py'
Feb 19 20:05:57 compute-0 sudo[207625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:57 compute-0 podman[207586]: 2026-02-19 20:05:57.979985764 +0000 UTC m=+0.051288496 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:05:58 compute-0 python3.9[207639]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531557.5712013-1029-29950447400382/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:05:58 compute-0 sudo[207625]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:58 compute-0 sudo[207713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovtztjelzrsiawcrzsmuowjhichqwlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531557.5712013-1029-29950447400382/AnsiballZ_systemd.py'
Feb 19 20:05:58 compute-0 sudo[207713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:58 compute-0 python3.9[207716]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:05:58 compute-0 systemd[1]: Reloading.
Feb 19 20:05:58 compute-0 systemd-rc-local-generator[207741]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:58 compute-0 systemd-sysv-generator[207744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:59 compute-0 sudo[207713]: pam_unix(sudo:session): session closed for user root
Feb 19 20:05:59 compute-0 sudo[207832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzhycnwofdhayjqmcocceczukiazhdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531557.5712013-1029-29950447400382/AnsiballZ_systemd.py'
Feb 19 20:05:59 compute-0 sudo[207832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:05:59 compute-0 python3.9[207835]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:05:59 compute-0 systemd[1]: Reloading.
Feb 19 20:05:59 compute-0 systemd-rc-local-generator[207864]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:05:59 compute-0 systemd-sysv-generator[207868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:05:59 compute-0 systemd[1]: Starting openstack_network_exporter container...
Feb 19 20:05:59 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184db8543ea61ea2bb4ea1eb902feb8a1ee22d5373a52a579f1d8ce87776d228/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184db8543ea61ea2bb4ea1eb902feb8a1ee22d5373a52a579f1d8ce87776d228/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184db8543ea61ea2bb4ea1eb902feb8a1ee22d5373a52a579f1d8ce87776d228/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 19 20:05:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.
Feb 19 20:05:59 compute-0 podman[207882]: 2026-02-19 20:05:59.99388281 +0000 UTC m=+0.122004565 container init 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *bridge.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *coverage.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *datapath.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *iface.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *memory.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *ovn.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *pmd_perf.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *pmd_rxq.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: INFO    20:06:00 main.go:48: registering *vswitch.Collector
Feb 19 20:06:00 compute-0 openstack_network_exporter[207898]: NOTICE  20:06:00 main.go:76: listening on https://:9105/metrics
Feb 19 20:06:00 compute-0 podman[207882]: 2026-02-19 20:06:00.023804322 +0000 UTC m=+0.151926027 container start 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 19 20:06:00 compute-0 podman[207882]: openstack_network_exporter
Feb 19 20:06:00 compute-0 systemd[1]: Started openstack_network_exporter container.
Feb 19 20:06:00 compute-0 sudo[207832]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:00 compute-0 podman[207908]: 2026-02-19 20:06:00.135054406 +0000 UTC m=+0.098693939 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 19 20:06:00 compute-0 python3.9[208082]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:06:01 compute-0 sudo[208232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpdvfnakeksmwcjlxfpvxkqnljbjase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531561.2014353-1074-168494368729858/AnsiballZ_stat.py'
Feb 19 20:06:01 compute-0 sudo[208232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:01 compute-0 python3.9[208235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:01 compute-0 sshd-session[207955]: Received disconnect from 125.31.2.160 port 46024:11: Bye Bye [preauth]
Feb 19 20:06:01 compute-0 sshd-session[207955]: Disconnected from authenticating user root 125.31.2.160 port 46024 [preauth]
Feb 19 20:06:01 compute-0 sudo[208232]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:01 compute-0 sudo[208358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymldzvwdgyeptskuofuaqlcsrwnpijcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531561.2014353-1074-168494368729858/AnsiballZ_copy.py'
Feb 19 20:06:01 compute-0 sudo[208358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:02 compute-0 python3.9[208361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531561.2014353-1074-168494368729858/.source.yaml _original_basename=.tps3p125 follow=False checksum=99c792d7c8c341a296c8a195eb430509c9953000 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:02 compute-0 sudo[208358]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:02 compute-0 sudo[208511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmceqdfhglfwbdwfdidkpgpbxnslppur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531562.290701-1089-230121604880359/AnsiballZ_find.py'
Feb 19 20:06:02 compute-0 sudo[208511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:02 compute-0 python3.9[208514]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 20:06:02 compute-0 sudo[208511]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:03 compute-0 podman[208591]: 2026-02-19 20:06:03.381642979 +0000 UTC m=+0.056880013 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 19 20:06:03 compute-0 sudo[208683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwhqlctxtqafgqvpnfbmagnpcsxbzqdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531563.1037467-1099-47225282212296/AnsiballZ_podman_container_info.py'
Feb 19 20:06:03 compute-0 sudo[208683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:03 compute-0 python3.9[208686]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 19 20:06:03 compute-0 sudo[208683]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:04 compute-0 sudo[208849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwwsctaieyrsnnzzdmajxlgnkpklgbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531563.883552-1107-170058405137261/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:04 compute-0 sudo[208849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:04 compute-0 python3.9[208852]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:04 compute-0 systemd[1]: Started libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope.
Feb 19 20:06:04 compute-0 podman[208853]: 2026-02-19 20:06:04.498774967 +0000 UTC m=+0.062008614 container exec 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 19 20:06:04 compute-0 podman[208872]: 2026-02-19 20:06:04.55633986 +0000 UTC m=+0.048131507 container exec_died 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 19 20:06:04 compute-0 podman[208853]: 2026-02-19 20:06:04.561789772 +0000 UTC m=+0.125023419 container exec_died 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:06:04 compute-0 systemd[1]: libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope: Deactivated successfully.
Feb 19 20:06:04 compute-0 sudo[208849]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:04 compute-0 sudo[209034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykzbfjfnwflxxfpsqladbenlysvutpfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531564.7174249-1115-10091284853517/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:04 compute-0 sudo[209034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:05 compute-0 python3.9[209037]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:05 compute-0 systemd[1]: Started libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope.
Feb 19 20:06:05 compute-0 podman[209038]: 2026-02-19 20:06:05.265325422 +0000 UTC m=+0.127565609 container exec 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:06:05 compute-0 podman[209058]: 2026-02-19 20:06:05.323330839 +0000 UTC m=+0.049988015 container exec_died 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:06:05 compute-0 podman[209038]: 2026-02-19 20:06:05.340670495 +0000 UTC m=+0.202910662 container exec_died 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 19 20:06:05 compute-0 systemd[1]: libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope: Deactivated successfully.
Feb 19 20:06:05 compute-0 sudo[209034]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:05 compute-0 sudo[209220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efgugtygzongcztkgqlqxgbnbtqhwqkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531565.5183685-1123-234245360191386/AnsiballZ_file.py'
Feb 19 20:06:05 compute-0 sudo[209220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:05 compute-0 python3.9[209223]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:05 compute-0 sudo[209220]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:06 compute-0 sudo[209373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgqsmzlikfzomdgkswrelaahqvfzdqlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531566.2269607-1132-73353401796539/AnsiballZ_podman_container_info.py'
Feb 19 20:06:06 compute-0 sudo[209373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:06 compute-0 python3.9[209376]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 19 20:06:06 compute-0 sudo[209373]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:07 compute-0 sudo[209539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlwlhpvjhiitjsplbxjdcsrxrjsjgdch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531566.8290105-1140-279835735732177/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:07 compute-0 sudo[209539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:07 compute-0 python3.9[209542]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:07 compute-0 systemd[1]: Started libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope.
Feb 19 20:06:07 compute-0 podman[209543]: 2026-02-19 20:06:07.353051452 +0000 UTC m=+0.102900732 container exec 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 19 20:06:07 compute-0 podman[209562]: 2026-02-19 20:06:07.420345632 +0000 UTC m=+0.057881934 container exec_died 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb 19 20:06:07 compute-0 podman[209543]: 2026-02-19 20:06:07.425438932 +0000 UTC m=+0.175288252 container exec_died 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 19 20:06:07 compute-0 systemd[1]: libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope: Deactivated successfully.
Feb 19 20:06:07 compute-0 sudo[209539]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:07 compute-0 sudo[209725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqliajbfjigqhdcpdhocititnwnfkavg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531567.5984516-1148-23034281978974/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:07 compute-0 sudo[209725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:08 compute-0 python3.9[209728]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:08 compute-0 systemd[1]: Started libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope.
Feb 19 20:06:08 compute-0 podman[209729]: 2026-02-19 20:06:08.096274713 +0000 UTC m=+0.064624317 container exec 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 19 20:06:08 compute-0 podman[209729]: 2026-02-19 20:06:08.126715332 +0000 UTC m=+0.095064906 container exec_died 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 19 20:06:08 compute-0 systemd[1]: libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope: Deactivated successfully.
Feb 19 20:06:08 compute-0 sudo[209725]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:08 compute-0 sudo[209908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypovskqoikupvajktgpuhhrgstjxwrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531568.282912-1156-267075519809512/AnsiballZ_file.py'
Feb 19 20:06:08 compute-0 sudo[209908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:08 compute-0 python3.9[209911]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:08 compute-0 sudo[209908]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:09 compute-0 sudo[210061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuxohnsnpmovyenjahnomzvhlrgivxmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531568.8770123-1165-215201159453796/AnsiballZ_podman_container_info.py'
Feb 19 20:06:09 compute-0 sudo[210061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:09 compute-0 python3.9[210064]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 19 20:06:09 compute-0 sudo[210061]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:09 compute-0 sudo[210227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuuupiyzadlzdmkzfeintucuyvfnytfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531569.4703808-1173-35779121728529/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:09 compute-0 sudo[210227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:09 compute-0 python3.9[210230]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:09 compute-0 systemd[1]: Started libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope.
Feb 19 20:06:09 compute-0 podman[210231]: 2026-02-19 20:06:09.97212179 +0000 UTC m=+0.059229757 container exec 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:06:10 compute-0 podman[210231]: 2026-02-19 20:06:10.003315302 +0000 UTC m=+0.090423249 container exec_died 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:06:10 compute-0 systemd[1]: libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope: Deactivated successfully.
Feb 19 20:06:10 compute-0 sudo[210227]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:10 compute-0 sudo[210413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcghynvcxxcltgccanezitcruvxjtsqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531570.283777-1181-1521396930150/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:10 compute-0 sudo[210413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:10 compute-0 python3.9[210416]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:10 compute-0 systemd[1]: Started libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope.
Feb 19 20:06:10 compute-0 podman[210417]: 2026-02-19 20:06:10.801340758 +0000 UTC m=+0.056602513 container exec 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216)
Feb 19 20:06:10 compute-0 podman[210417]: 2026-02-19 20:06:10.83756624 +0000 UTC m=+0.092827995 container exec_died 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260216, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:06:10 compute-0 systemd[1]: libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope: Deactivated successfully.
Feb 19 20:06:10 compute-0 sudo[210413]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:11 compute-0 sudo[210599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdkylqeyasoxvqnhqbrkboghxogmgpom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531571.0231097-1189-84198583866728/AnsiballZ_file.py'
Feb 19 20:06:11 compute-0 sudo[210599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:11 compute-0 python3.9[210602]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:11 compute-0 sudo[210599]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:11 compute-0 sudo[210752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikbpnlngukxyfrkcxrrncbpcyccxfrpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531571.6634986-1198-162499541213866/AnsiballZ_podman_container_info.py'
Feb 19 20:06:11 compute-0 sudo[210752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:12 compute-0 python3.9[210755]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 19 20:06:12 compute-0 sudo[210752]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:12 compute-0 sudo[210918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxwxryqsnlfdvawvlbgtuxsbeoxflyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531572.31757-1206-193438655251775/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:12 compute-0 sudo[210918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:12 compute-0 python3.9[210921]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:12 compute-0 systemd[1]: Started libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope.
Feb 19 20:06:12 compute-0 podman[210922]: 2026-02-19 20:06:12.800782278 +0000 UTC m=+0.064466842 container exec fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:06:12 compute-0 podman[210942]: 2026-02-19 20:06:12.86242846 +0000 UTC m=+0.052073591 container exec_died fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:06:12 compute-0 podman[210922]: 2026-02-19 20:06:12.867300454 +0000 UTC m=+0.130985018 container exec_died fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:06:12 compute-0 systemd[1]: libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope: Deactivated successfully.
Feb 19 20:06:12 compute-0 sudo[210918]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:12 compute-0 podman[210953]: 2026-02-19 20:06:12.943814493 +0000 UTC m=+0.054046953 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:06:13 compute-0 sudo[211126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdhgycmnmbycikomichpkupnjnkzzqtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531573.0617814-1214-105703602666452/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:13 compute-0 sudo[211126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:13 compute-0 python3.9[211129]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:13 compute-0 systemd[1]: Started libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope.
Feb 19 20:06:13 compute-0 podman[211130]: 2026-02-19 20:06:13.588029375 +0000 UTC m=+0.080852737 container exec fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:06:13 compute-0 podman[211130]: 2026-02-19 20:06:13.622627855 +0000 UTC m=+0.115451137 container exec_died fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:06:13 compute-0 systemd[1]: libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope: Deactivated successfully.
Feb 19 20:06:13 compute-0 sudo[211126]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:14 compute-0 sudo[211310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwkmdoxigcfcpewzmrijrhjrbynliif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531573.852185-1222-272897447222711/AnsiballZ_file.py'
Feb 19 20:06:14 compute-0 sudo[211310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:14 compute-0 python3.9[211313]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:14 compute-0 sudo[211310]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:14 compute-0 podman[211314]: 2026-02-19 20:06:14.362087467 +0000 UTC m=+0.054109515 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:06:14 compute-0 sudo[211485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiashihweqyxcvbublvmgjygzgkmznuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531574.5998428-1231-275689182744653/AnsiballZ_podman_container_info.py'
Feb 19 20:06:14 compute-0 sudo[211485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:15 compute-0 python3.9[211488]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 19 20:06:15 compute-0 sudo[211485]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:15 compute-0 sudo[211651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwvgqgdjssfhhvnwqhttydpsodyibpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531575.2578225-1239-45094195708242/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:15 compute-0 sudo[211651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:15 compute-0 python3.9[211654]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:15 compute-0 systemd[1]: Started libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope.
Feb 19 20:06:15 compute-0 podman[211655]: 2026-02-19 20:06:15.802366244 +0000 UTC m=+0.065326379 container exec 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:06:15 compute-0 podman[211675]: 2026-02-19 20:06:15.86036633 +0000 UTC m=+0.050635396 container exec_died 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:06:15 compute-0 podman[211655]: 2026-02-19 20:06:15.865853853 +0000 UTC m=+0.128813908 container exec_died 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:06:15 compute-0 systemd[1]: libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope: Deactivated successfully.
Feb 19 20:06:15 compute-0 sudo[211651]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:16 compute-0 sudo[211837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vliqhqabcobqtfcmizdztxpjoglmtdnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531576.054484-1247-80099100512872/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:16 compute-0 sudo[211837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:16 compute-0 python3.9[211840]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:16 compute-0 systemd[1]: Started libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope.
Feb 19 20:06:16 compute-0 podman[211841]: 2026-02-19 20:06:16.580244876 +0000 UTC m=+0.084742150 container exec 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:06:16 compute-0 podman[211841]: 2026-02-19 20:06:16.609492457 +0000 UTC m=+0.113989721 container exec_died 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:06:16 compute-0 systemd[1]: libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope: Deactivated successfully.
Feb 19 20:06:16 compute-0 sudo[211837]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:16 compute-0 sudo[212023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptpazkzgyvincgoyntfbbgrpudnmadri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531576.7720072-1255-150110206796460/AnsiballZ_file.py'
Feb 19 20:06:16 compute-0 sudo[212023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:17 compute-0 python3.9[212026]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:17 compute-0 sudo[212023]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:17 compute-0 podman[212027]: 2026-02-19 20:06:17.284242441 +0000 UTC m=+0.080106625 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:06:17 compute-0 sudo[212202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doofcnqnclwvmipajcpvejtxdrewvqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531577.3389094-1264-6600811728995/AnsiballZ_podman_container_info.py'
Feb 19 20:06:17 compute-0 sudo[212202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:17 compute-0 python3.9[212205]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 19 20:06:17 compute-0 sudo[212202]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:18 compute-0 sudo[212368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndkjgzkgvtvzguoeqgutdvprwuywlso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531577.9736686-1272-154416571223891/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:18 compute-0 sudo[212368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:18 compute-0 python3.9[212371]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:18 compute-0 systemd[1]: Started libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope.
Feb 19 20:06:18 compute-0 podman[212372]: 2026-02-19 20:06:18.45062824 +0000 UTC m=+0.065231646 container exec 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.7, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 19 20:06:18 compute-0 podman[212372]: 2026-02-19 20:06:18.484498417 +0000 UTC m=+0.099101823 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, managed_by=edpm_ansible, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9)
Feb 19 20:06:18 compute-0 systemd[1]: libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope: Deactivated successfully.
Feb 19 20:06:18 compute-0 sudo[212368]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:18 compute-0 sudo[212552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvcnvlzgymvtwvcrdnpplautzupvrokq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531578.6471682-1280-253936120593310/AnsiballZ_podman_container_exec.py'
Feb 19 20:06:18 compute-0 sudo[212552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:19 compute-0 python3.9[212555]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:06:19 compute-0 systemd[1]: Started libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope.
Feb 19 20:06:19 compute-0 podman[212556]: 2026-02-19 20:06:19.107675616 +0000 UTC m=+0.058862516 container exec 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 19 20:06:19 compute-0 podman[212576]: 2026-02-19 20:06:19.167424368 +0000 UTC m=+0.050914255 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z)
Feb 19 20:06:19 compute-0 podman[212556]: 2026-02-19 20:06:19.17258679 +0000 UTC m=+0.123773660 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1770267347, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 19 20:06:19 compute-0 systemd[1]: libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope: Deactivated successfully.
Feb 19 20:06:19 compute-0 sudo[212552]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:19 compute-0 sudo[212738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwmemamxtjdzbjabnetxjsimjxylniin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531579.4249856-1288-218909142035060/AnsiballZ_file.py'
Feb 19 20:06:19 compute-0 sudo[212738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:19 compute-0 python3.9[212741]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:19 compute-0 sudo[212738]: pam_unix(sudo:session): session closed for user root
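The ansible.builtin.file task above enforces root ownership and mode 0700 recursively on the exporter healthcheck directory. A rough shell equivalent of what the module guarantees (a sketch only; the module is idempotent and reports "changed" only when something actually differs):

    mkdir -p /var/lib/openstack/healthchecks/openstack_network_exporter
    chown -R 0:0  /var/lib/openstack/healthchecks/openstack_network_exporter   # owner=0 group=0
    chmod -R 0700 /var/lib/openstack/healthchecks/openstack_network_exporter   # mode=0700, recurse=True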
Feb 19 20:06:20 compute-0 sudo[212891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtryhzaurypxjimbzsnlteuzdgbyhkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531580.1702654-1297-120420622567277/AnsiballZ_file.py'
Feb 19 20:06:20 compute-0 sudo[212891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:20 compute-0 python3.9[212894]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:20 compute-0 sudo[212891]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:21 compute-0 sudo[213044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkwlhdncfmknwhpdtocnqyxyqyvrwjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531580.7730608-1305-36937670000685/AnsiballZ_stat.py'
Feb 19 20:06:21 compute-0 sudo[213044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:21 compute-0 python3.9[213047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:21 compute-0 sudo[213044]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:21 compute-0 sudo[213168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuzgpvktbeebiwllufesbdajezeshstp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531580.7730608-1305-36937670000685/AnsiballZ_copy.py'
Feb 19 20:06:21 compute-0 sudo[213168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:21 compute-0 python3.9[213171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531580.7730608-1305-36937670000685/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:21 compute-0 sudo[213168]: pam_unix(sudo:session): session closed for user root
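Note the pattern repeated throughout this run: an ansible.legacy.stat call (sha1 checksum) precedes each ansible.legacy.copy, so a file is rewritten only when its content differs. A minimal sketch of the same gate in shell, using the paths and checksum logged by the copy task above ($src stands for the staged .source.yaml from the Ansible temp directory):

    dest=/var/lib/edpm-config/firewall/telemetry.yaml
    expected=d942d984493b214bda2913f753ff68cdcedff00e      # checksum logged by the copy task
    [ "$(sha1sum "$dest" 2>/dev/null | cut -d' ' -f1)" = "$expected" ] \
        || install -m 0640 "$src" "$dest"                  # copy only on mismatch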
Feb 19 20:06:22 compute-0 sudo[213321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfntsdfkusdifimvsontaxioonxwvomk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531581.8868918-1321-227016196623712/AnsiballZ_file.py'
Feb 19 20:06:22 compute-0 sudo[213321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:22 compute-0 python3.9[213324]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:22 compute-0 sudo[213321]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:22 compute-0 sudo[213474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flzrjthhqnhgifyxtgilxetbhvvcdjhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531582.4933934-1329-271923053673267/AnsiballZ_stat.py'
Feb 19 20:06:22 compute-0 sudo[213474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:22 compute-0 python3.9[213477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:22 compute-0 sudo[213474]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:23 compute-0 sudo[213553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrzwhekjvgcfpbvegequprvfuericxfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531582.4933934-1329-271923053673267/AnsiballZ_file.py'
Feb 19 20:06:23 compute-0 sudo[213553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:23 compute-0 python3.9[213556]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:23 compute-0 sudo[213553]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:23 compute-0 sudo[213706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sezidlymphcnbmrcvuooucfvhxazbtov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531583.4327056-1341-126078212479693/AnsiballZ_stat.py'
Feb 19 20:06:23 compute-0 sudo[213706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:23 compute-0 python3.9[213709]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:23 compute-0 sudo[213706]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:24 compute-0 sudo[213785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxghuaktfdffjaqwgbefqqudfeldvch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531583.4327056-1341-126078212479693/AnsiballZ_file.py'
Feb 19 20:06:24 compute-0 sudo[213785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:24 compute-0 python3.9[213788]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.afiiuv9a recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:24 compute-0 sudo[213785]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:24 compute-0 nova_compute[188777]: 2026-02-19 20:06:24.595 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:24 compute-0 nova_compute[188777]: 2026-02-19 20:06:24.615 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:24 compute-0 nova_compute[188777]: 2026-02-19 20:06:24.615 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:24 compute-0 nova_compute[188777]: 2026-02-19 20:06:24.615 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:06:24 compute-0 sudo[213938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxdqdhnggpuncmzkwpfokntwnvxyjztr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531584.4219265-1353-176465322743755/AnsiballZ_stat.py'
Feb 19 20:06:24 compute-0 sudo[213938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:24 compute-0 python3.9[213941]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:24 compute-0 sudo[213938]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:25 compute-0 sudo[214017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmwrvrnnesweyuyfwasnkemxarugwpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531584.4219265-1353-176465322743755/AnsiballZ_file.py'
Feb 19 20:06:25 compute-0 sudo[214017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.286 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.287 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.287 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.287 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:06:25 compute-0 python3.9[214020]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:25 compute-0 sudo[214017]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.442 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.443 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5833MB free_disk=72.26909637451172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.444 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.444 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.495 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.495 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.518 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.533 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.536 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:06:25 compute-0 nova_compute[188777]: 2026-02-19 20:06:25.536 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
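The inventory reported to placement above implies the schedulable capacity directly: placement treats capacity as (total - reserved) multiplied by allocation_ratio per resource class. Working that through with the logged numbers as a back-of-the-envelope check (not nova code):

    awk 'BEGIN {
        print "VCPU:      ", (8    - 0)   * 4.0;   # 32 schedulable vCPUs
        print "MEMORY_MB: ", (7679 - 512) * 1.0;   # 7167 MB
        print "DISK_GB:   ", (79   - 0)   * 0.9;   # 71.1 GB
    }'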
Feb 19 20:06:25 compute-0 sudo[214170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huwskcvpjnartavkfymzonjlvmkzdgrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531585.531995-1366-261094592799894/AnsiballZ_command.py'
Feb 19 20:06:25 compute-0 sudo[214170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:25 compute-0 python3.9[214173]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:06:26 compute-0 sudo[214170]: pam_unix(sudo:session): session closed for user root
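`nft -j list ruleset` dumps the live ruleset as JSON, which the role captures here to compare against the desired state. For inspecting the same output by hand, it can be filtered with jq (the jq usage is illustrative, not part of the role):

    nft -j list ruleset | jq -r '.nftables[] | select(.chain) | .chain.name'   # list chain names
    nft -j list ruleset | jq '.nftables | length'                              # count ruleset objects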
Feb 19 20:06:26 compute-0 nova_compute[188777]: 2026-02-19 20:06:26.536 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:26 compute-0 nova_compute[188777]: 2026-02-19 20:06:26.537 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:06:26 compute-0 nova_compute[188777]: 2026-02-19 20:06:26.537 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:06:26 compute-0 sudo[214325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbvumwdqkvmttswgywbhcurnzdtnhneu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531586.1572137-1374-252614465848887/AnsiballZ_edpm_nftables_from_files.py'
Feb 19 20:06:26 compute-0 sudo[214325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:26 compute-0 nova_compute[188777]: 2026-02-19 20:06:26.552 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:06:26 compute-0 nova_compute[188777]: 2026-02-19 20:06:26.553 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:26 compute-0 nova_compute[188777]: 2026-02-19 20:06:26.554 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:06:26 compute-0 python3[214328]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 19 20:06:26 compute-0 sudo[214325]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:27 compute-0 sudo[214478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neuppajgquzzefvifxpssbtlbikghnfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531586.915467-1382-30796930485535/AnsiballZ_stat.py'
Feb 19 20:06:27 compute-0 sudo[214478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:27 compute-0 python3.9[214481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:27 compute-0 sudo[214478]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:27 compute-0 sudo[214557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmvlpkssexjlfvphsaajfkfdtsvmuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531586.915467-1382-30796930485535/AnsiballZ_file.py'
Feb 19 20:06:27 compute-0 sudo[214557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:27 compute-0 python3.9[214560]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:27 compute-0 sudo[214557]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:28 compute-0 sudo[214722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlamynsfshkoohljbqraefbgydipwycf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531587.9962606-1394-69328890406780/AnsiballZ_stat.py'
Feb 19 20:06:28 compute-0 sudo[214722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:28 compute-0 podman[214684]: 2026-02-19 20:06:28.346419589 +0000 UTC m=+0.090827688 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
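The health_status=healthy events in this log are emitted by podman's built-in healthcheck timer, which runs the configured 'test' command inside the container. The same check can be triggered manually against the container named in the event above:

    podman healthcheck run node_exporter && echo healthy   # exit 0 means the test command passed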
Feb 19 20:06:28 compute-0 python3.9[214731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:28 compute-0 sudo[214722]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:28 compute-0 sudo[214813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgujfrbhbnljjadwjralopvknbetodvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531587.9962606-1394-69328890406780/AnsiballZ_file.py'
Feb 19 20:06:28 compute-0 sudo[214813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:28 compute-0 python3.9[214816]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:28 compute-0 sudo[214813]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:29 compute-0 sudo[214966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urivjueoavzcvyyihqdlnqdoyhsavqht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531589.0106697-1406-152633266812513/AnsiballZ_stat.py'
Feb 19 20:06:29 compute-0 sudo[214966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:29 compute-0 python3.9[214969]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:29 compute-0 sudo[214966]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:29 compute-0 sudo[215045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfcdoulgglhjuuozccluibyyofxtxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531589.0106697-1406-152633266812513/AnsiballZ_file.py'
Feb 19 20:06:29 compute-0 sudo[215045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:29 compute-0 python3.9[215048]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:29 compute-0 sudo[215045]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:30 compute-0 sudo[215208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esjiqtihsgloqewldfkqacfhrwijemuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531590.0276635-1418-80631390949041/AnsiballZ_stat.py'
Feb 19 20:06:30 compute-0 sudo[215208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:30 compute-0 podman[215172]: 2026-02-19 20:06:30.292753859 +0000 UTC m=+0.052837813 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z)
Feb 19 20:06:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:06:30.412 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:06:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:06:30.413 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:06:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:06:30.413 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:06:30 compute-0 python3.9[215218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:30 compute-0 sudo[215208]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:30 compute-0 sudo[215298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfwwgbobhrkwenwqwrowphjqfntpxtuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531590.0276635-1418-80631390949041/AnsiballZ_file.py'
Feb 19 20:06:30 compute-0 sudo[215298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:30 compute-0 python3.9[215301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:30 compute-0 sudo[215298]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:31 compute-0 sudo[215451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkqihjsvbzeqyapxyjuwslkpxirucutw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531591.030654-1430-2589416327272/AnsiballZ_stat.py'
Feb 19 20:06:31 compute-0 sudo[215451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:31 compute-0 python3.9[215454]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:31 compute-0 sudo[215451]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:31 compute-0 sudo[215577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqjcdcpsqdcrizxawsbzunrgavgrpaoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531591.030654-1430-2589416327272/AnsiballZ_copy.py'
Feb 19 20:06:31 compute-0 sudo[215577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:31 compute-0 python3.9[215580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531591.030654-1430-2589416327272/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:32 compute-0 sudo[215577]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:32 compute-0 sudo[215730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlanailnskcimtxuptybtgoulluyyfqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531592.16109-1445-124491304341223/AnsiballZ_file.py'
Feb 19 20:06:32 compute-0 sudo[215730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:32 compute-0 python3.9[215733]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:32 compute-0 sudo[215730]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:32 compute-0 sudo[215883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvaccntijthaouzkntbkqcxsuowltiff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531592.7206395-1453-36882402956586/AnsiballZ_command.py'
Feb 19 20:06:32 compute-0 sudo[215883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:33 compute-0 python3.9[215886]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:06:33 compute-0 sudo[215883]: pam_unix(sudo:session): session closed for user root
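The -c flag makes nft parse and validate the concatenated fragments without committing anything to the kernel, so a syntax error aborts the play before the live ruleset is touched. Spelled out, this is the logged command (the order matters: chains must be defined before the rules and jumps that reference them):

    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # check only; the exit status gates the next step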
Feb 19 20:06:33 compute-0 sudo[216051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyhtwzkhvtvgmvulzfbkwjihltnvarn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531593.3237922-1461-251470427363716/AnsiballZ_blockinfile.py'
Feb 19 20:06:33 compute-0 sudo[216051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:33 compute-0 podman[216013]: 2026-02-19 20:06:33.759974059 +0000 UTC m=+0.056947773 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:06:33 compute-0 python3.9[216060]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:33 compute-0 sudo[216051]: pam_unix(sudo:session): session closed for user root
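Given the marker `# {mark} ANSIBLE MANAGED BLOCK` and the block content logged above, the stanza written to /etc/sysconfig/nftables.conf should look like the following (reconstructed from the task parameters; the validate=nft -c -f %s option checks the whole file before it is written):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK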
Feb 19 20:06:34 compute-0 sudo[216211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyuuskdxpqtjnrcmuiohfwwhpuzksiwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531594.3535137-1470-168945820593813/AnsiballZ_command.py'
Feb 19 20:06:34 compute-0 sudo[216211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:35 compute-0 python3.9[216214]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:06:35 compute-0 sudo[216211]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:35 compute-0 sudo[216365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmqatrdqvukwkhndszdbtssdaevccspz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531595.197415-1478-20822253997168/AnsiballZ_stat.py'
Feb 19 20:06:35 compute-0 sudo[216365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:35 compute-0 python3.9[216368]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:06:35 compute-0 sudo[216365]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:35 compute-0 sudo[216520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skjnzpklenvufmijnocmdoxugrqjnuii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531595.7422686-1486-25725503631141/AnsiballZ_command.py'
Feb 19 20:06:35 compute-0 sudo[216520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:36 compute-0 python3.9[216523]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:06:36 compute-0 sudo[216520]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:36 compute-0 sudo[216676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqnsurnzixcncmibuqucdijlqexywzgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531596.2873607-1494-249315682914310/AnsiballZ_file.py'
Feb 19 20:06:36 compute-0 sudo[216676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:36 compute-0 python3.9[216679]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:36 compute-0 sudo[216676]: pam_unix(sudo:session): session closed for user root
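The touch/stat/remove sequence around /etc/nftables/edpm-rules.nft.changed is a change sentinel: the file is touched when edpm-rules.nft is rewritten (20:06:32), its existence is checked (20:06:35), the flush/rules/jumps load runs only because it exists (20:06:36), and it is removed afterwards. The pattern in sketch form:

    sentinel=/etc/nftables/edpm-rules.nft.changed
    if [ -e "$sentinel" ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -   # nft applies the input as one atomic transaction
        rm -f "$sentinel"
    fi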
Feb 19 20:06:36 compute-0 podman[204724]: time="2026-02-19T20:06:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:06:36 compute-0 podman[204724]: @ - - [19/Feb/2026:20:06:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21988 "" "Go-http-client/1.1"
Feb 19 20:06:36 compute-0 podman[204724]: @ - - [19/Feb/2026:20:06:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2983 "" "Go-http-client/1.1"
Feb 19 20:06:37 compute-0 sshd-session[189104]: Connection closed by 192.168.122.30 port 52760
Feb 19 20:06:37 compute-0 sshd-session[189101]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:06:37 compute-0 systemd-logind[810]: Session 25 logged out. Waiting for processes to exit.
Feb 19 20:06:37 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Feb 19 20:06:37 compute-0 systemd[1]: session-25.scope: Consumed 1min 31.349s CPU time.
Feb 19 20:06:37 compute-0 systemd-logind[810]: Removed session 25.
Feb 19 20:06:38 compute-0 openstack_network_exporter[207898]: ERROR   20:06:38 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:06:38 compute-0 openstack_network_exporter[207898]: ERROR   20:06:38 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
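These exporter errors come from ovs-appctl-style calls that only succeed when Open vSwitch runs a userspace (dpif-netdev, i.e. DPDK) datapath; on a host using the kernel datapath there is no such datapath to query, so the PMD collectors fail harmlessly. Reproducible by hand against the local OVS, assuming ovs-appctl is available on the host:

    ovs-appctl dpif-netdev/pmd-perf-show   # fails as logged when no userspace datapath exists
    ovs-appctl dpif/show                   # lists the datapaths that do exist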
Feb 19 20:06:43 compute-0 sshd-session[216712]: Accepted publickey for zuul from 192.168.122.30 port 37808 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 20:06:43 compute-0 systemd-logind[810]: New session 26 of user zuul.
Feb 19 20:06:43 compute-0 systemd[1]: Started Session 26 of User zuul.
Feb 19 20:06:43 compute-0 sshd-session[216712]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:06:43 compute-0 podman[216714]: 2026-02-19 20:06:43.287630807 +0000 UTC m=+0.049112765 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:06:43 compute-0 sudo[216891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdhyjnjmidiafyxfagtiuvleoalvyrex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531603.3728094-19-150427046087355/AnsiballZ_systemd_service.py'
Feb 19 20:06:43 compute-0 sudo[216891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:44 compute-0 python3.9[216894]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:06:44 compute-0 systemd[1]: Reloading.
Feb 19 20:06:44 compute-0 systemd-rc-local-generator[216923]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:06:44 compute-0 systemd-sysv-generator[216926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:06:44 compute-0 sudo[216891]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:44 compute-0 podman[216938]: 2026-02-19 20:06:44.617078585 +0000 UTC m=+0.051174445 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:06:45 compute-0 python3.9[217108]: ansible-ansible.builtin.service_facts Invoked
Feb 19 20:06:45 compute-0 network[217125]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 19 20:06:45 compute-0 network[217126]: 'network-scripts' will be removed from distribution in near future.
Feb 19 20:06:45 compute-0 network[217127]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 19 20:06:47 compute-0 podman[217232]: 2026-02-19 20:06:47.390998136 +0000 UTC m=+0.072972977 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:06:48 compute-0 sudo[217424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgjeimsruvykxyeenscvhmfapxainmyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531607.834308-42-120852393928649/AnsiballZ_systemd_service.py'
Feb 19 20:06:48 compute-0 sudo[217424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:48 compute-0 python3.9[217427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:06:48 compute-0 sudo[217424]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:48 compute-0 sudo[217578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oauttygryrkhhkqcvodbxholkryqndzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531608.6337447-52-78444170639537/AnsiballZ_file.py'
Feb 19 20:06:48 compute-0 sudo[217578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:49 compute-0 python3.9[217581]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:49 compute-0 sudo[217578]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:49 compute-0 sudo[217732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atccyxsndwintaeguqydrgkgiqwaceeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531609.3181572-60-49384443613860/AnsiballZ_file.py'
Feb 19 20:06:49 compute-0 sudo[217732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:49 compute-0 python3.9[217735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:06:49 compute-0 sudo[217732]: pam_unix(sudo:session): session closed for user root
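The three tasks above tear the old tripleo_ceilometer_agent_ipmi.service out completely: stop and disable the unit, then delete its unit file from both /usr/lib/systemd/system and /etc/systemd/system. A minimal Python sketch of the same teardown, assuming systemctl is available and the process runs as root (this mirrors the logged tasks, not the Ansible modules' own code):

    import subprocess
    from pathlib import Path

    UNIT = "tripleo_ceilometer_agent_ipmi.service"

    # Stop and disable the unit (the systemd_service task above).
    subprocess.run(["systemctl", "disable", "--now", UNIT], check=False)

    # Remove any unit files left behind, then reload systemd so the
    # deletion takes effect (the daemon_reload task further down).
    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        Path(unit_dir, UNIT).unlink(missing_ok=True)
    subprocess.run(["systemctl", "daemon-reload"], check=True)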
Feb 19 20:06:50 compute-0 sudo[217885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsmjrmwwluchysxpjqkqslubtblpvwij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531609.9561598-69-148961579918301/AnsiballZ_command.py'
Feb 19 20:06:50 compute-0 sudo[217885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:50 compute-0 python3.9[217888]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:06:50 compute-0 sudo[217885]: pam_unix(sudo:session): session closed for user root
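The shell fragment above is a guard, not a plain disable: certmonger.service is disabled only if it is currently active, and it is masked only when no real unit file sits at /etc/systemd/system/certmonger.service, because masking works by planting a /dev/null symlink at exactly that path. A rough Python equivalent of the logged logic, as a sketch:

    import subprocess
    from pathlib import Path

    UNIT = "certmonger.service"

    # `systemctl is-active` exits 0 only when the unit is running.
    active = subprocess.run(["systemctl", "is-active", UNIT],
                            capture_output=True).returncode == 0
    if active:
        subprocess.run(["systemctl", "disable", "--now", UNIT], check=True)
        # Mask only when no real unit file occupies the path that the
        # mask symlink would take over.
        if not Path("/etc/systemd/system", UNIT).is_file():
            subprocess.run(["systemctl", "mask", UNIT], check=True)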
Feb 19 20:06:51 compute-0 python3.9[218041]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 20:06:51 compute-0 sudo[218191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kigwhkeisuzrvnihzqqegvvcmfppkuni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531611.586197-87-104551570909091/AnsiballZ_systemd_service.py'
Feb 19 20:06:51 compute-0 sudo[218191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:52 compute-0 python3.9[218194]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:06:52 compute-0 systemd[1]: Reloading.
Feb 19 20:06:52 compute-0 systemd-rc-local-generator[218222]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:06:52 compute-0 systemd-sysv-generator[218225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Feb 19 20:06:52 compute-0 sudo[218191]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:52 compute-0 sudo[218386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ombbkgjzxvebtdkfpldbrhrumkjnadll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531612.6429758-95-157139441285328/AnsiballZ_command.py'
Feb 19 20:06:52 compute-0 sudo[218386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:53 compute-0 python3.9[218389]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:06:53 compute-0 sudo[218386]: pam_unix(sudo:session): session closed for user root
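reset-failed above clears the failed-state bookkeeping for the unit that was just deleted; without it, the removed service could keep appearing in `systemctl --failed` until reboot. The equivalent direct call, sketched:

    import subprocess

    # Forget any remembered failed state of the now-removed unit.
    subprocess.run(["systemctl", "reset-failed",
                    "tripleo_ceilometer_agent_ipmi.service"], check=False)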
Feb 19 20:06:53 compute-0 sudo[218540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npkzldqibvyagdcxcwflcbqkafibqbui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531613.435027-104-141975463492281/AnsiballZ_file.py'
Feb 19 20:06:53 compute-0 sudo[218540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:53 compute-0 python3.9[218543]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:06:53 compute-0 sudo[218540]: pam_unix(sudo:session): session closed for user root
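The file task above creates /var/lib/openstack/telemetry-power-monitoring owned by zuul, mode 0750, with the SELinux type container_file_t so containers are allowed to read what lands there. A sketch of the same steps with the standard library plus chcon (the recursive chcon call stands in for the module's SELinux handling and is an assumption):

    import os
    import shutil
    import subprocess

    path = "/var/lib/openstack/telemetry-power-monitoring"
    os.makedirs(path, mode=0o750, exist_ok=True)
    shutil.chown(path, user="zuul", group="zuul")
    # Label the tree so podman may bind-mount and read it.
    subprocess.run(["chcon", "-R", "-t", "container_file_t", path], check=True)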
Feb 19 20:06:54 compute-0 python3.9[218694]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:06:55 compute-0 python3.9[218846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:56 compute-0 python3.9[218967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531614.90977-120-204870693109112/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:06:56 compute-0 python3.9[219117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:57 compute-0 python3.9[219238]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531616.2482266-135-216872262811780/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
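Each config file in this stretch is written with a stat/copy pair: ansible.legacy.stat reads the destination's SHA-1 first, and ansible.legacy.copy only runs when that checksum differs from the rendered source, which is what keeps the tasks idempotent. A minimal sketch of that gate (paths are illustrative):

    import hashlib
    import shutil
    from pathlib import Path

    def sha1_of(path):
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    def copy_if_changed(src, dest):
        # Copy only when the destination is missing or its checksum
        # differs, mirroring the stat/copy pairs in the log above.
        if not Path(dest).exists() or sha1_of(src) != sha1_of(dest):
            shutil.copy2(src, dest)
            return True
        return False

    copy_if_changed("firewall.yaml",
                    "/var/lib/openstack/telemetry-power-monitoring/firewall.yaml")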
Feb 19 20:06:58 compute-0 sudo[219388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivetnvbqlrtabweuockkpqxkjtnxqjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531617.662347-153-52124936941050/AnsiballZ_getent.py'
Feb 19 20:06:58 compute-0 sudo[219388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:06:58 compute-0 python3.9[219391]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Feb 19 20:06:58 compute-0 sudo[219388]: pam_unix(sudo:session): session closed for user root
Feb 19 20:06:59 compute-0 podman[219516]: 2026-02-19 20:06:59.244005567 +0000 UTC m=+0.059235199 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:06:59 compute-0 python3.9[219555]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:06:59 compute-0 podman[204724]: time="2026-02-19T20:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:06:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21988 "" "Go-http-client/1.1"
Feb 19 20:06:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2987 "" "Go-http-client/1.1"
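The two GET lines above are hits on the libpod REST API over the local podman socket (the same /run/podman/podman.sock that podman_exporter mounts further down), listing containers and then sampling their stats. A self-contained sketch of the containers/json query using only the standard library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks to a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])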
Feb 19 20:06:59 compute-0 python3.9[219688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531618.962453-181-212092022689554/.source.conf _original_basename=ceilometer.conf follow=False checksum=06bb8599d9c8a601385c703338dd9ca518a4891f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:00 compute-0 python3.9[219838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:00 compute-0 podman[219933]: 2026-02-19 20:07:00.749788379 +0000 UTC m=+0.062848944 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Feb 19 20:07:00 compute-0 python3.9[219969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531620.0270038-181-144244338490314/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:01 compute-0 python3.9[220128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:01 compute-0 openstack_network_exporter[207898]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:07:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:07:01 compute-0 openstack_network_exporter[207898]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:07:01 compute-0 openstack_network_exporter[207898]: 
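The two appctl errors above come from the exporter driving ovs-appctl's dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only answer for a userspace (netdev/DPDK) datapath; on a node running just the kernel datapath they fail with "please specify an existing datapath", so the errors are noisy but harmless here. A sketch of probing for that case before scraping, assuming ovs-appctl is installed:

    import subprocess

    def pmd_rxq_show():
        # dpif-netdev/* commands require a userspace (DPDK) datapath;
        # treat a nonzero exit as "no such datapath", not a hard error.
        result = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                                capture_output=True, text=True)
        return result.stdout if result.returncode == 0 else None

    out = pmd_rxq_show()
    print(out if out is not None else "no userspace datapath; skipping PMD stats")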
Feb 19 20:07:01 compute-0 python3.9[220250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531621.04172-181-277952930108892/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:02 compute-0 python3.9[220400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:07:03 compute-0 python3.9[220552]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:07:03 compute-0 python3.9[220704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:04 compute-0 podman[220799]: 2026-02-19 20:07:04.168006748 +0000 UTC m=+0.061068402 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:07:04 compute-0 python3.9[220836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531623.394274-240-105768041788297/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
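One detail worth flagging above: this copy passes mode=420 as a bare integer, while the neighbouring copies pass quoted octal strings such as 0640. YAML parses an unquoted mode as decimal, and 420 decimal happens to equal octal 644, so the result here is the intended rw-r--r--, but only by that coincidence:

    # 420 (decimal) and 0o644 (octal) are the same permission bits.
    assert 420 == 0o644
    print(oct(420))  # -> 0o644  (rw-r--r--)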
Feb 19 20:07:04 compute-0 sudo[220994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfxmexcqyaxanmlveqsuiangwhaajsxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531624.5012732-255-20806283539871/AnsiballZ_file.py'
Feb 19 20:07:04 compute-0 sudo[220994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:04 compute-0 python3.9[220997]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:04 compute-0 sudo[220994]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:05 compute-0 sudo[221147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbiiasxqkagacxpkzvqbhxqgwxcevvas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531625.0894792-263-280876447652841/AnsiballZ_file.py'
Feb 19 20:07:05 compute-0 sudo[221147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:05 compute-0 python3.9[221150]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:05 compute-0 sudo[221147]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:05 compute-0 sudo[221300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsvhsepowtkdxnlzeuceswbgrqynnogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531625.6177301-271-21090892511932/AnsiballZ_file.py'
Feb 19 20:07:05 compute-0 sudo[221300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:05 compute-0 python3.9[221303]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:07:06 compute-0 sudo[221300]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:06 compute-0 sudo[221453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxpdwyhslhjsgcptjbbsrhdsayzsinjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531626.3406823-279-183484404725059/AnsiballZ_stat.py'
Feb 19 20:07:06 compute-0 sudo[221453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:06 compute-0 python3.9[221456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:06 compute-0 sudo[221453]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:07 compute-0 sudo[221577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtdaxxhawkynejebbtclzveucewumfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531626.3406823-279-183484404725059/AnsiballZ_copy.py'
Feb 19 20:07:07 compute-0 sudo[221577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:07 compute-0 python3.9[221580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531626.3406823-279-183484404725059/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:07:07 compute-0 sudo[221577]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:07 compute-0 sudo[221654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkoohuzluwjkzwywislxuyjyuypkhyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531626.3406823-279-183484404725059/AnsiballZ_stat.py'
Feb 19 20:07:07 compute-0 sudo[221654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:07 compute-0 python3.9[221657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:07 compute-0 sudo[221654]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:07 compute-0 sudo[221778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeuvycmksxhexwufyddcbgjkggmdxzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531626.3406823-279-183484404725059/AnsiballZ_copy.py'
Feb 19 20:07:07 compute-0 sudo[221778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:08 compute-0 python3.9[221781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531626.3406823-279-183484404725059/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:07:08 compute-0 sudo[221778]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:08 compute-0 sudo[221931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhnijltaihbgdgiafgvoxnqmcfhffkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531628.2048917-279-4434667090700/AnsiballZ_stat.py'
Feb 19 20:07:08 compute-0 sudo[221931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:08 compute-0 python3.9[221934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:08 compute-0 sudo[221931]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:08 compute-0 sudo[222055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvlqgaavsaisfbhscitrrpzzzixkflhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531628.2048917-279-4434667090700/AnsiballZ_copy.py'
Feb 19 20:07:08 compute-0 sudo[222055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:09 compute-0 python3.9[222058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771531628.2048917-279-4434667090700/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:07:09 compute-0 sudo[222055]: pam_unix(sudo:session): session closed for user root
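The stat/copy pairs above stage per-container healthcheck scripts under /var/lib/openstack/healthchecks/<container>/. As the config_data in the health_status events shows, each such directory is bind-mounted read-only at /openstack inside its container, and podman runs /openstack/healthcheck as the health test. A sketch of staging one script (the source filename is illustrative):

    import os
    import shutil

    name = "kepler"
    dest_dir = f"/var/lib/openstack/healthchecks/{name}"
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, "healthcheck")
    shutil.copy2("healthcheck", dest)  # rendered healthcheck script
    os.chmod(dest, 0o700)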
Feb 19 20:07:09 compute-0 sudo[222208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjaqsascruymzmjfkdgrzzcsmiydhwby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531629.506108-321-170169869765786/AnsiballZ_file.py'
Feb 19 20:07:09 compute-0 sudo[222208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:09 compute-0 python3.9[222211]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:09 compute-0 sudo[222208]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:10 compute-0 sudo[222361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyodqquycizdfjnkfbjmtzcxnezqafqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531630.1095805-329-82791927570623/AnsiballZ_file.py'
Feb 19 20:07:10 compute-0 sudo[222361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:10 compute-0 python3.9[222364]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:07:10 compute-0 sudo[222361]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:10 compute-0 sudo[222514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebsxurniiclijvapwbrbnqlyeseujlmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531630.7303169-337-22919258506652/AnsiballZ_stat.py'
Feb 19 20:07:10 compute-0 sudo[222514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:11 compute-0 python3.9[222517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:11 compute-0 sudo[222514]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:11 compute-0 sudo[222638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sftcpfnsfbjvyoxephtqkxjabbltohvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531630.7303169-337-22919258506652/AnsiballZ_copy.py'
Feb 19 20:07:11 compute-0 sudo[222638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:11 compute-0 python3.9[222641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531630.7303169-337-22919258506652/.source.json _original_basename=.vpnp7_38 follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:11 compute-0 sudo[222638]: pam_unix(sudo:session): session closed for user root
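The JSON staged above is the kolla_start configuration for the container; its actual content is not logged (content=NOT_LOGGING_PARAMETER). For orientation only, such files usually carry a command plus a list of config_files entries that kolla_start copies into place when the container boots; the field values below are illustrative assumptions, not read from the real file:

    import json

    config = {
        # Assumed command; the real one is not visible in this log.
        "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/*",
                "dest": "/",
                "merge": True,
                "preserve_properties": True,
            }
        ],
    }

    with open("ceilometer_agent_ipmi.json", "w") as f:
        json.dump(config, f, indent=2)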
Feb 19 20:07:12 compute-0 python3.9[222791]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:13 compute-0 podman[223036]: 2026-02-19 20:07:13.367791588 +0000 UTC m=+0.049114186 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:07:14 compute-0 sudo[223236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diujqnlhvngdyigdpdgkuozupgcorwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531633.8058224-377-178820632380307/AnsiballZ_container_config_data.py'
Feb 19 20:07:14 compute-0 sudo[223236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:14 compute-0 python3.9[223239]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Feb 19 20:07:14 compute-0 sudo[223236]: pam_unix(sudo:session): session closed for user root
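container_config_data above scans the startup-config directory for files matching *.json and returns their parsed contents so later steps can derive a stable container config hash. A rough stand-in for that scan (the hashing step is an assumption, not the module's exact algorithm):

    import glob
    import hashlib
    import json
    import os

    config_path = "/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi"
    configs = {}
    for path in sorted(glob.glob(os.path.join(config_path, "*.json"))):
        with open(path) as f:
            configs[os.path.basename(path)] = json.load(f)

    # One stable digest over all configs, so any change re-triggers deployment.
    digest = hashlib.sha256(
        json.dumps(configs, sort_keys=True).encode()).hexdigest()
    print(digest)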
Feb 19 20:07:14 compute-0 podman[223264]: 2026-02-19 20:07:14.747230186 +0000 UTC m=+0.067672314 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.131 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so this polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.132 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.132 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.133 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.134 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.135 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.135 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.135 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.135 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.136 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.136 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.137 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.137 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.138 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.139 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.139 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.139 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.140 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.140 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.140 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.141 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.141 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f4126330>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.142 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.142 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.142 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.142 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.142 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.145 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.146 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:07:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:07:15 compute-0 sudo[223410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njglqqebradbjxxpknotgadiczljxnbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531634.7111733-388-95801502472721/AnsiballZ_container_config_hash.py'
Feb 19 20:07:15 compute-0 sudo[223410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:15 compute-0 python3.9[223413]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:07:15 compute-0 sudo[223410]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:16 compute-0 sudo[223563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwsnaikfpgrbmlmwihpgxfidplpbfbkq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531635.6299965-398-200024921632019/AnsiballZ_edpm_container_manage.py'
Feb 19 20:07:16 compute-0 sudo[223563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:16 compute-0 python3[223566]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:07:16 compute-0 podman[223602]: 2026-02-19 20:07:16.570940529 +0000 UTC m=+0.068571011 container create ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:07:16 compute-0 podman[223602]: 2026-02-19 20:07:16.539068604 +0000 UTC m=+0.036699136 image pull 5a0c248a731dc2e1754b1906fede374f0f92203547e5b10eb435ef1a64b36296 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Feb 19 20:07:16 compute-0 python3[223566]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49 --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Feb 19 20:07:16 compute-0 sudo[223563]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:17 compute-0 sudo[223789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajeznusadwwvvartrobpwnmgjduwovc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531636.859193-406-172606552928506/AnsiballZ_stat.py'
Feb 19 20:07:17 compute-0 sudo[223789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:17 compute-0 python3.9[223792]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:07:17 compute-0 sudo[223789]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:17 compute-0 sudo[223955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjahosbgoruwkjsvzsttieluuwjjpgyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531637.540679-415-81923224881331/AnsiballZ_file.py'
Feb 19 20:07:17 compute-0 sudo[223955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:17 compute-0 podman[223918]: 2026-02-19 20:07:17.811127197 +0000 UTC m=+0.080411434 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:07:17 compute-0 python3.9[223967]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:17 compute-0 sudo[223955]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:18 compute-0 sudo[224048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvssmbbmkwfmmhabdvvrpqkmwcmpouln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531637.540679-415-81923224881331/AnsiballZ_stat.py'
Feb 19 20:07:18 compute-0 sudo[224048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:18 compute-0 python3.9[224051]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:07:18 compute-0 sudo[224048]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:18 compute-0 sudo[224200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadlsjifznrtunuxqzqicgkmvqasvsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531638.4650512-415-221702272151335/AnsiballZ_copy.py'
Feb 19 20:07:18 compute-0 sudo[224200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:19 compute-0 python3.9[224203]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531638.4650512-415-221702272151335/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:19 compute-0 sudo[224200]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:19 compute-0 sudo[224277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upporozwtgnegqoeuupplvdodrboqjju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531638.4650512-415-221702272151335/AnsiballZ_systemd.py'
Feb 19 20:07:19 compute-0 sudo[224277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:19 compute-0 python3.9[224280]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:07:19 compute-0 systemd[1]: Reloading.
Feb 19 20:07:20 compute-0 systemd-rc-local-generator[224308]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:07:20 compute-0 systemd-sysv-generator[224311]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:07:20 compute-0 sudo[224277]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:20 compute-0 sudo[224396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxyktbovomlvrfczfyryreiryvapnlyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531638.4650512-415-221702272151335/AnsiballZ_systemd.py'
Feb 19 20:07:20 compute-0 sudo[224396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:20 compute-0 python3.9[224399]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:07:20 compute-0 systemd[1]: Reloading.
Feb 19 20:07:21 compute-0 systemd-sysv-generator[224430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:07:21 compute-0 systemd-rc-local-generator[224425]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:07:21 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 19 20:07:21 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.
Feb 19 20:07:21 compute-0 podman[224445]: 2026-02-19 20:07:21.347461911 +0000 UTC m=+0.141332310 container init ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + sudo -E kolla_set_configs
Feb 19 20:07:21 compute-0 sudo[224466]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 19 20:07:21 compute-0 sudo[224466]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 19 20:07:21 compute-0 sudo[224466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:07:21 compute-0 podman[224445]: 2026-02-19 20:07:21.391062745 +0000 UTC m=+0.184933104 container start ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 19 20:07:21 compute-0 podman[224445]: ceilometer_agent_ipmi
Feb 19 20:07:21 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Feb 19 20:07:21 compute-0 sudo[224396]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Validating config file
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Copying service configuration files
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: INFO:__main__:Writing out command to execute
Feb 19 20:07:21 compute-0 sudo[224466]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: ++ cat /run_command
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + ARGS=
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + sudo kolla_copy_cacerts
Feb 19 20:07:21 compute-0 sudo[224494]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 19 20:07:21 compute-0 sudo[224494]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 19 20:07:21 compute-0 sudo[224494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:07:21 compute-0 sudo[224494]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + [[ ! -n '' ]]
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + . kolla_extend_start
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + umask 0022
Feb 19 20:07:21 compute-0 ceilometer_agent_ipmi[224460]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Feb 19 20:07:21 compute-0 podman[224467]: 2026-02-19 20:07:21.494295649 +0000 UTC m=+0.089986561 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:07:21 compute-0 systemd[1]: ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e-71e6c97d81fbe62.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:07:21 compute-0 systemd[1]: ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e-71e6c97d81fbe62.service: Failed with result 'exit-code'.
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.219 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.219 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.219 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.220 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.221 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.222 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.223 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.224 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.225 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.226 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.228 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.229 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.232 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.232 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.252 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.254 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.256 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.348 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp5lbv9qw2/privsep.sock']
Feb 19 20:07:22 compute-0 python3.9[224640]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:07:22 compute-0 sudo[224645]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp5lbv9qw2/privsep.sock
Feb 19 20:07:22 compute-0 sudo[224645]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 19 20:07:22 compute-0 sudo[224645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:07:22 compute-0 sudo[224645]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.912 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.913 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5lbv9qw2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.783 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.786 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.788 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 19 20:07:22 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:22.788 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.040 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.040 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.042 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.042 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.042 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.043 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.043 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.043 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.043 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.043 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.044 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.044 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.044 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.049 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.050 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.050 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.050 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.050 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.050 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.050 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.051 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.051 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.051 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.051 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.051 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.051 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.052 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.052 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.052 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.053 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.053 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.053 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.053 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.053 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.053 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.054 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.054 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.054 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.054 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.054 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.054 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.055 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.055 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.055 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.055 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.055 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.055 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.056 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.056 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.056 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.056 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.056 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.056 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.057 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.057 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.057 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.057 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.057 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.058 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.058 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.058 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.058 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.058 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.059 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.059 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.059 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.059 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.059 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.060 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.060 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.060 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.060 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.060 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.060 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.061 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.061 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.061 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.061 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.061 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.062 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.062 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.062 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.062 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.062 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 sudo[224802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzvoiogdbwikgfxupxmbfvzmmdirxpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531642.7931042-460-65586233584955/AnsiballZ_stat.py'
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.063 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.063 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.063 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.063 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.063 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.064 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.064 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.064 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.064 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.064 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.064 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.065 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.065 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.065 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.065 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.065 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.066 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.066 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.066 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.066 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.066 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.068 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.068 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.068 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.068 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.069 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.069 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.069 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.069 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.069 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.070 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.070 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.070 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.070 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 sudo[224802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.070 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.071 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.071 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.071 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.073 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.074 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.074 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.074 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.074 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.074 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.075 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.075 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.075 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.075 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.075 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.075 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.076 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.077 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.078 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.080 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.081 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.082 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.083 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
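The dump that ends with the asterisk banner above is produced by oslo.config itself: cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values() once per worker at startup, which is why every line points into oslo_config/cfg.py (line 2602 for options in the default group, 2609 for group-scoped options such as monasca.* and gnocchi.*, 2613 for the closing banner), and why options registered with secret=True, such as transport_url, are masked as ****. A minimal standalone sketch of the same mechanism, with option names borrowed from the dump and purely illustrative defaults:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        # secret=True is what renders the value as "****" in the dump
        cfg.StrOpt('transport_url', secret=True,
                   default='rabbit://user:password@example:5672/'),
    ])
    conf([])  # parse an empty command line, keeping the defaults
    conf.log_opt_values(LOG, logging.DEBUG)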
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.083 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 19 20:07:23 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:23.085 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
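The dict logged by ceilometer.agent above is the parsed form of the polling definition file (polling.cfg_file = polling.yaml in the dump). Rendered back into YAML, the file would look roughly like this, with every meter matching hardware.* polled every 120 seconds:

    sources:
        - name: pollsters
          interval: 120
          meters:
              - hardware.*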
Feb 19 20:07:23 compute-0 python3.9[224805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:23 compute-0 sudo[224802]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:23 compute-0 sudo[224929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppnbrkiofsgsfzjbsvjbjcanchkxnoqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531642.7931042-460-65586233584955/AnsiballZ_copy.py'
Feb 19 20:07:23 compute-0 sudo[224929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:23 compute-0 python3.9[224932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531642.7931042-460-65586233584955/.source.yaml _original_basename=.ul8olss8 follow=False checksum=d91806b13d92d39255bd343b0470f9ef53fe886f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
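The AnsiballZ_stat/AnsiballZ_copy pair around this point is Ansible's usual write pattern: stat the destination, then copy a staged temp file into place under sudo. The logged arguments (dest, mode=0644, content=NOT_LOGGING_PARAMETER) imply a play task roughly like the sketch below; the task name and the variable holding the content are hypothetical, since the real content is deliberately not logged:

    - name: Record deployed services        # hypothetical task name
      become: true
      ansible.builtin.copy:
        dest: /var/lib/edpm-config/deployed_services.yaml
        mode: "0644"
        # assumption: the content comes from a play variable; it appears
        # above only as content=NOT_LOGGING_PARAMETER
        content: "{{ deployed_services | to_nice_yaml }}"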
Feb 19 20:07:23 compute-0 sudo[224929]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:24 compute-0 nova_compute[188777]: 2026-02-19 20:07:24.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:24 compute-0 sudo[225082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-withpkszvigvxzfhvfjcbvlyjhjbllyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531644.127241-477-54576595673720/AnsiballZ_file.py'
Feb 19 20:07:24 compute-0 sudo[225082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:24 compute-0 python3.9[225085]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:24 compute-0 sudo[225082]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:25 compute-0 sudo[225235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zojrehlqswrycbajqbquyksjrnqdqeaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531644.7741287-485-35154011622337/AnsiballZ_file.py'
Feb 19 20:07:25 compute-0 sudo[225235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:25 compute-0 python3.9[225238]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:25 compute-0 sudo[225235]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.291 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.292 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.293 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.293 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.470 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.471 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5800MB free_disk=72.30143737792969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.471 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.471 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.519 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.520 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.540 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.551 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.553 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:07:25 compute-0 nova_compute[188777]: 2026-02-19 20:07:25.553 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
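The Acquiring/acquired/released triplets above come from oslo.concurrency: nova's resource tracker serializes its audit behind an in-process lock named "compute_resources", and the lockutils wrapper logs how long each caller waited for the lock and how long it was held. A minimal sketch of the same pattern, assuming nothing beyond oslo.concurrency itself:

    from oslo_concurrency import lockutils

    # The decorator's inner wrapper emits the same DEBUG lines seen above:
    # "Acquiring lock ...", "Lock ... acquired ... waited Xs",
    # "Lock ... released ... held Xs".
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # audit hypervisor resources while holding the lock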
Feb 19 20:07:25 compute-0 python3.9[225388]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.552 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.553 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.553 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.576 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.577 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.577 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:26 compute-0 nova_compute[188777]: 2026-02-19 20:07:26.577 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:07:27 compute-0 nova_compute[188777]: 2026-02-19 20:07:27.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:27 compute-0 sudo[225809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uynquldkrbjkurcxcyipllqhyxhmzjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531647.2667623-519-135026755301422/AnsiballZ_container_config_data.py'
Feb 19 20:07:27 compute-0 sudo[225809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:27 compute-0 python3.9[225812]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Feb 19 20:07:27 compute-0 sudo[225809]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:28 compute-0 sudo[225962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkhjslzqxctmawwxdhdqyadlefyddzjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531647.975144-530-177848710023245/AnsiballZ_container_config_hash.py'
Feb 19 20:07:28 compute-0 sudo[225962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:28 compute-0 nova_compute[188777]: 2026-02-19 20:07:28.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:07:28 compute-0 python3.9[225965]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 19 20:07:28 compute-0 sudo[225962]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:28 compute-0 sudo[226115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvbmbpnzpznyrgtsdxsskgclvskuykne ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531648.7218862-540-24080760215172/AnsiballZ_edpm_container_manage.py'
Feb 19 20:07:28 compute-0 sudo[226115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:29 compute-0 python3[226118]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Feb 19 20:07:29 compute-0 podman[226151]: 2026-02-19 20:07:29.390937304 +0000 UTC m=+0.065266053 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:07:29 compute-0 podman[226158]: 2026-02-19 20:07:29.394372294 +0000 UTC m=+0.068297212 container create 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=kepler, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=)
Feb 19 20:07:29 compute-0 podman[226158]: 2026-02-19 20:07:29.357255077 +0000 UTC m=+0.031179965 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Feb 19 20:07:29 compute-0 podman[204724]: time="2026-02-19T20:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:07:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28001 "" "Go-http-client/1.1"
Feb 19 20:07:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3424 "" "Go-http-client/1.1"
Feb 19 20:07:30 compute-0 python3[226118]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Feb 19 20:07:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:07:30.413 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:07:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:07:30.414 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:07:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:07:30.414 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:07:30 compute-0 sudo[226115]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:30 compute-0 sudo[226381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwdytsmxxeueyczzzxgeajqvghqpfaew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531650.658598-548-88863771679627/AnsiballZ_stat.py'
Feb 19 20:07:30 compute-0 sudo[226381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:30 compute-0 podman[226342]: 2026-02-19 20:07:30.919613709 +0000 UTC m=+0.065000715 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Feb 19 20:07:31 compute-0 python3.9[226390]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:07:31 compute-0 sudo[226381]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:31 compute-0 openstack_network_exporter[207898]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:07:31 compute-0 openstack_network_exporter[207898]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:07:31 compute-0 sudo[226544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayijqckstknrmbajqszbczhfnlhfzatk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531651.376051-557-169535252895153/AnsiballZ_file.py'
Feb 19 20:07:31 compute-0 sudo[226544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:31 compute-0 python3.9[226547]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:31 compute-0 sudo[226544]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:32 compute-0 sudo[226621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbrgurbgsbsjumlvftnomhlkssxpjcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531651.376051-557-169535252895153/AnsiballZ_stat.py'
Feb 19 20:07:32 compute-0 sudo[226621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:32 compute-0 python3.9[226624]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:07:32 compute-0 sudo[226621]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:32 compute-0 sudo[226773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbuhptdgimgyvckpihprdwprxmvsbked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531652.2882173-557-168863010257575/AnsiballZ_copy.py'
Feb 19 20:07:32 compute-0 sudo[226773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:32 compute-0 python3.9[226776]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771531652.2882173-557-168863010257575/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:32 compute-0 sudo[226773]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:33 compute-0 sudo[226850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvuowxcwxfbyfsqtbqqggfhsxgjwwdnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531652.2882173-557-168863010257575/AnsiballZ_systemd.py'
Feb 19 20:07:33 compute-0 sudo[226850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:33 compute-0 python3.9[226853]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 19 20:07:33 compute-0 systemd[1]: Reloading.
Feb 19 20:07:33 compute-0 systemd-sysv-generator[226889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:07:33 compute-0 systemd-rc-local-generator[226882]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:07:33 compute-0 sudo[226850]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:33 compute-0 sudo[226969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zscsbeibwfbppaehaoyrkiyllrvhhjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531652.2882173-557-168863010257575/AnsiballZ_systemd.py'
Feb 19 20:07:33 compute-0 sudo[226969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:34 compute-0 python3.9[226972]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 19 20:07:34 compute-0 systemd[1]: Reloading.
Feb 19 20:07:34 compute-0 podman[226975]: 2026-02-19 20:07:34.259133539 +0000 UTC m=+0.054204376 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:07:34 compute-0 systemd-rc-local-generator[227016]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 19 20:07:34 compute-0 systemd-sysv-generator[227021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 19 20:07:34 compute-0 systemd[1]: Starting kepler container...
Feb 19 20:07:34 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:07:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.
Feb 19 20:07:34 compute-0 podman[227038]: 2026-02-19 20:07:34.617085729 +0000 UTC m=+0.107318967 container init 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Feb 19 20:07:34 compute-0 kepler[227054]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 19 20:07:34 compute-0 podman[227038]: 2026-02-19 20:07:34.64222424 +0000 UTC m=+0.132457468 container start 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, config_id=kepler, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, architecture=x86_64)
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.644755       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.644888       1 config.go:293] using gCgroup ID in the BPF program: true
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.644917       1 config.go:295] kernel version: 5.14
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.645564       1 power.go:78] Unable to obtain power, use estimate method
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.645589       1 redfish.go:169] failed to get redfish credential file path
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.645925       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.645940       1 power.go:79] using none to obtain power
Feb 19 20:07:34 compute-0 kepler[227054]: E0219 20:07:34.645952       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Feb 19 20:07:34 compute-0 kepler[227054]: E0219 20:07:34.645977       1 exporter.go:154] failed to init GPU accelerators: no devices found
Feb 19 20:07:34 compute-0 podman[227038]: kepler
Feb 19 20:07:34 compute-0 kepler[227054]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.647853       1 exporter.go:84] Number of CPUs: 8
Feb 19 20:07:34 compute-0 systemd[1]: Started kepler container.
Feb 19 20:07:34 compute-0 sudo[226969]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:34 compute-0 podman[227064]: 2026-02-19 20:07:34.719551152 +0000 UTC m=+0.071112939 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:07:34 compute-0 systemd[1]: 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce-500fb78452805d58.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:07:34 compute-0 systemd[1]: 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce-500fb78452805d58.service: Failed with result 'exit-code'.
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.991524       1 watcher.go:83] Using in cluster k8s config
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.991556       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Feb 19 20:07:34 compute-0 kepler[227054]: E0219 20:07:34.991624       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.994376       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Feb 19 20:07:34 compute-0 kepler[227054]: I0219 20:07:34.994403       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.000510       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.000539       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.013668       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.013697       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.013707       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.019640       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.019675       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.019680       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.019686       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.019692       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.019702       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.020560       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.020653       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.020680       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.021102       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.021608       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Feb 19 20:07:35 compute-0 kepler[227054]: I0219 20:07:35.022227       1 exporter.go:208] Started Kepler in 377.659549ms
Feb 19 20:07:35 compute-0 python3.9[227246]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 19 20:07:36 compute-0 sudo[227396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enqohyoowwoigspahpvmtoonjcjdspjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531655.8246787-602-119794284477469/AnsiballZ_stat.py'
Feb 19 20:07:36 compute-0 sudo[227396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:36 compute-0 python3.9[227399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:07:36 compute-0 sudo[227396]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:36 compute-0 sudo[227522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybyqwjfzkiyldoecccdfzzmnspubswi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531655.8246787-602-119794284477469/AnsiballZ_copy.py'
Feb 19 20:07:36 compute-0 sudo[227522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:36 compute-0 python3.9[227525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531655.8246787-602-119794284477469/.source.yaml _original_basename=.k810v9em follow=False checksum=917e01b03f8f60df33d8c296bdeba404fba2e678 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:36 compute-0 sudo[227522]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:37 compute-0 sudo[227675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplyweopoksnvqitctfqmrrfslyqnaoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531657.175277-617-160359110986131/AnsiballZ_systemd.py'
Feb 19 20:07:37 compute-0 sudo[227675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:37 compute-0 python3.9[227678]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:07:38 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:38.142 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:38.244 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:38.245 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:38.245 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[224460]: 2026-02-19 20:07:38.252 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Feb 19 20:07:38 compute-0 systemd[1]: libpod-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.scope: Deactivated successfully.
Feb 19 20:07:38 compute-0 systemd[1]: libpod-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.scope: Consumed 1.999s CPU time.
Feb 19 20:07:38 compute-0 podman[227682]: 2026-02-19 20:07:38.426982478 +0000 UTC m=+0.347459160 container died ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 19 20:07:38 compute-0 systemd[1]: ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e-71e6c97d81fbe62.timer: Deactivated successfully.
Feb 19 20:07:38 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.
Feb 19 20:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e-userdata-shm.mount: Deactivated successfully.
Feb 19 20:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c-merged.mount: Deactivated successfully.
Feb 19 20:07:38 compute-0 podman[227682]: 2026-02-19 20:07:38.511862967 +0000 UTC m=+0.432339659 container cleanup ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:07:38 compute-0 podman[227682]: ceilometer_agent_ipmi
Feb 19 20:07:38 compute-0 podman[227710]: ceilometer_agent_ipmi
Feb 19 20:07:38 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Feb 19 20:07:38 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Feb 19 20:07:38 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 19 20:07:38 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08c5aa6104675957d2d569291062dcced615081c10aad26ccd6d578268b902c/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 19 20:07:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.
Feb 19 20:07:38 compute-0 podman[227722]: 2026-02-19 20:07:38.868827196 +0000 UTC m=+0.235958183 container init ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: + sudo -E kolla_set_configs
Feb 19 20:07:38 compute-0 sudo[227743]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 19 20:07:38 compute-0 sudo[227743]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 19 20:07:38 compute-0 sudo[227743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:07:38 compute-0 podman[227722]: 2026-02-19 20:07:38.922085011 +0000 UTC m=+0.289215968 container start ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 19 20:07:38 compute-0 podman[227722]: ceilometer_agent_ipmi
Feb 19 20:07:38 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Feb 19 20:07:38 compute-0 sudo[227675]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Validating config file
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Copying service configuration files
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: INFO:__main__:Writing out command to execute
Feb 19 20:07:38 compute-0 sudo[227743]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: ++ cat /run_command
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: + ARGS=
Feb 19 20:07:38 compute-0 ceilometer_agent_ipmi[227737]: + sudo kolla_copy_cacerts
Feb 19 20:07:39 compute-0 sudo[227762]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 19 20:07:39 compute-0 sudo[227762]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 19 20:07:39 compute-0 sudo[227762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:07:39 compute-0 sudo[227762]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: + [[ ! -n '' ]]
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: + . kolla_extend_start
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: + umask 0022
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Feb 19 20:07:39 compute-0 podman[227745]: 2026-02-19 20:07:39.041106044 +0000 UTC m=+0.111948362 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:07:39 compute-0 systemd[1]: ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e-642cacd02e48a86a.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:07:39 compute-0 systemd[1]: ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e-642cacd02e48a86a.service: Failed with result 'exit-code'.
Feb 19 20:07:39 compute-0 sudo[227918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnjywadqmmfqvnkkfttqoywkdpekltmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531659.2008119-625-165504394673687/AnsiballZ_systemd.py'
Feb 19 20:07:39 compute-0 sudo[227918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.794 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.794 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.794 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.794 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.795 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.796 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.797 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.798 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.799 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.800 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.801 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.802 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.803 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.804 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.805 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.806 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.807 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.808 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.809 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
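[annotation] The block above is oslo.config's standard startup option dump: with debug and log_options both true, cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values() when the service manager starts, printing every registered option between the asterisk banners, with secret-marked options masked as ****. A minimal sketch of producing such a dump outside the agent; the two registered options are a tiny illustrative subset of what ceilometer actually registers:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('batch_size', default=50),
        cfg.StrOpt('hypervisor_inspector', default='libvirt'),
    ])
    # Parse the same config file the agent uses, then dump every option;
    # this is what produces the "Full set of CONF" block in the log.
    CONF(['--config-file', '/etc/ceilometer/ceilometer.conf'],
         project='ceilometer')
    CONF.log_opt_values(LOG, logging.DEBUG)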
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.828 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.830 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.831 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
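[annotation] The three INFO lines above say the configured pollsters.d directory exists but holds no definition files, so no dynamic pollsters are built and the agent falls back to entry-point pollsters only. A rough sketch of the directory scan being described; the file pattern and YAML loading are assumptions, not ceilometer's exact code:

    import glob
    import os
    import yaml   # PyYAML

    def find_dynamic_pollsters(dirs):
        definitions = []
        for d in dirs:
            # Assumed layout: each YAML file in the directory defines pollsters.
            for path in sorted(glob.glob(os.path.join(d, "*.yaml"))):
                with open(path) as f:
                    definitions.extend(yaml.safe_load(f) or [])
        return definitions

    print(find_dynamic_pollsters(["/etc/ceilometer/pollsters.d"]))  # -> [] here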
Feb 19 20:07:39 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:39.849 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp8fzew1o2/privsep.sock']
Feb 19 20:07:39 compute-0 sudo[227926]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp8fzew1o2/privsep.sock
Feb 19 20:07:39 compute-0 sudo[227926]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 19 20:07:39 compute-0 sudo[227926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 19 20:07:39 compute-0 python3.9[227921]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
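[annotation] This line is the zuul job's ansible.builtin.systemd module (delivered as the AnsiballZ payload run under sudo at 20:07:39) restarting edpm_kepler.service; everything that follows, stop, container cleanup, start, and the first healthcheck, is systemd reacting to that single restart. Reduced to its essentials, the module's effect is roughly:

    import subprocess
    # Rough equivalent of the logged task:
    # name=edpm_kepler.service state=restarted scope=system daemon_reload=False
    subprocess.run(["systemctl", "restart", "edpm_kepler.service"], check=True)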
Feb 19 20:07:39 compute-0 systemd[1]: Stopping kepler container...
Feb 19 20:07:40 compute-0 kepler[227054]: I0219 20:07:40.010611       1 exporter.go:218] Received shutdown signal
Feb 19 20:07:40 compute-0 kepler[227054]: I0219 20:07:40.011426       1 exporter.go:226] Exiting...
Feb 19 20:07:40 compute-0 systemd[1]: libpod-9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.scope: Deactivated successfully.
Feb 19 20:07:40 compute-0 podman[227932]: 2026-02-19 20:07:40.192577124 +0000 UTC m=+0.232369180 container died 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Feb 19 20:07:40 compute-0 systemd[1]: 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce-500fb78452805d58.timer: Deactivated successfully.
Feb 19 20:07:40 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.
Feb 19 20:07:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce-userdata-shm.mount: Deactivated successfully.
Feb 19 20:07:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-794ff0d8cb87c7e94731559330a52fa4255579c0dc48116e6c71d37fbbcf3f6b-merged.mount: Deactivated successfully.
Feb 19 20:07:40 compute-0 podman[227932]: 2026-02-19 20:07:40.244480357 +0000 UTC m=+0.284272433 container cleanup 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 19 20:07:40 compute-0 podman[227932]: kepler
Feb 19 20:07:40 compute-0 podman[227961]: kepler
Feb 19 20:07:40 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Feb 19 20:07:40 compute-0 systemd[1]: Stopped kepler container.
Feb 19 20:07:40 compute-0 systemd[1]: Starting kepler container...
Feb 19 20:07:40 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:07:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.
Feb 19 20:07:40 compute-0 podman[227974]: 2026-02-19 20:07:40.467651866 +0000 UTC m=+0.116630190 container init 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc.)
Feb 19 20:07:40 compute-0 kepler[227990]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 19 20:07:40 compute-0 sudo[227926]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.499 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.500 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8fzew1o2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.361 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.365 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.367 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.368 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
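[annotation] The privsep lines above (helper launched via sudo/ceilometer-rootwrap, connection accepted on the temporary UNIX socket, daemon running as uid/gid 0/0 with only the listed capabilities) are oslo.privsep's standard bootstrap; note the daemon's own messages carry earlier timestamps because they are relayed to the parent's log after the fact. In application code this is driven by a PrivContext plus entrypoint-decorated functions. A minimal sketch: the context name and capability set mirror the log, while read_ipmi_sensor is a hypothetical stand-in for ceilometer's real privileged helpers:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_ipmi_sensor(path):
        # Executes inside the privsep daemon (uid 0, capabilities above),
        # not in the calling agent; first call spawns the daemon as logged.
        with open(path, 'rb') as f:
            return f.read()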
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.503177       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.503341       1 config.go:293] using gCgroup ID in the BPF program: true
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.503397       1 config.go:295] kernel version: 5.14
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.503978       1 power.go:78] Unable to obtain power, use estimate method
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.504008       1 redfish.go:169] failed to get redfish credential file path
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.504375       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.504392       1 power.go:79] using none to obtain power
Feb 19 20:07:40 compute-0 kepler[227990]: E0219 20:07:40.504406       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Feb 19 20:07:40 compute-0 kepler[227990]: E0219 20:07:40.504430       1 exporter.go:154] failed to init GPU accelerators: no devices found
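[annotation] Kepler's startup messages above trace its power-source fallback inside this Nova guest: direct power readings are unavailable, no Redfish credential file is configured, no ACPI power meter path exists, so it settles on "none" and estimates power; GPU accelerator init likewise fails because the VM exposes no GPU, which is why both E-level lines are expected here. A condensed Python rendering of that selection chain; Kepler itself is Go, and the probe order and helpers below are inferred from the log messages, not Kepler's real API:

    # Hedged sketch of the fallback order the log shows; probes are placeholders.
    def pick_power_source(probes):
        for name, available in probes:
            if available():
                return name
        return "none"   # log: "using none to obtain power" -> estimate method

    probes = [
        ("direct", lambda: False),    # "Unable to obtain power, use estimate method"
        ("redfish", lambda: False),   # "failed to get redfish credential file path"
        ("acpi", lambda: False),      # "Could not find any ACPI power meter path"
    ]
    print(pick_power_source(probes))  # -> "none"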
Feb 19 20:07:40 compute-0 podman[227974]: 2026-02-19 20:07:40.504892188 +0000 UTC m=+0.153870512 container start 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler)
Feb 19 20:07:40 compute-0 kepler[227990]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 19 20:07:40 compute-0 kepler[227990]: I0219 20:07:40.506597       1 exporter.go:84] Number of CPUs: 8
Feb 19 20:07:40 compute-0 podman[227974]: kepler
Feb 19 20:07:40 compute-0 systemd[1]: Started kepler container.
Feb 19 20:07:40 compute-0 sudo[227918]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:40 compute-0 podman[228000]: 2026-02-19 20:07:40.608422505 +0000 UTC m=+0.088535747 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, version=9.4, io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30)
Feb 19 20:07:40 compute-0 systemd[1]: 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce-64ca9ae3bb96b419.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 20:07:40 compute-0 systemd[1]: 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce-64ca9ae3bb96b419.service: Failed with result 'exit-code'.
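[annotation] The health_status event above reports health_status=starting with health_failing_streak=1, and the matching transient healthcheck service exits 1: the first probe fired before the freshly restarted exporter was serving, which is expected noise during a restart rather than a broken container. A sketch for polling the container until its health settles; the JSON key layout follows podman's inspect output and may vary across podman versions:

    import json
    import subprocess
    import time

    def wait_healthy(container, timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(["podman", "inspect", container],
                                 capture_output=True, text=True,
                                 check=True).stdout
            state = json.loads(out)[0]["State"]
            # Key path assumed from podman's inspect JSON; may differ by version.
            if state.get("Health", {}).get("Status") == "healthy":
                return True
            time.sleep(2)
        return False

    print(wait_healthy("kepler"))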
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.613 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.614 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.615 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.615 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.615 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.615 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.616 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
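[annotation] The run of "Skip loading extension" lines falls into two groups: the hardware.ipmi.{current,fan,temperature,voltage} pollsters are skipped because IPMITool finds no usable BMC on this virtualized host, while the hardware.ipmi.node.* (Intel Node Manager) pollsters fail at instantiation with the object.__new__() error. With nothing left in the namespace, the agent logs the warning above. A quick local check for whether IPMI pollsters could ever load, under the assumption that a working BMC exposes a /dev/ipmi* device node on Linux:

    import glob

    # ceilometer's ipmi pollsters need a baseboard management controller;
    # with a working IPMI stack the kernel exposes /dev/ipmi0 (or similar).
    devices = glob.glob("/dev/ipmi*")
    print("IPMI device nodes:", devices or "none - ipmi pollsters will be skipped")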
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.619 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.619 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.619 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.619 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.620 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.621 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.622 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.623 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.624 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.625 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.626 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.627 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.628 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.629 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.630 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.631 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.632 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.634 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.635 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
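[annotation] The block just closed is oslo.config's full option dump (log_options = True, debug = True): it merges built-in defaults, /etc/ceilometer/ceilometer.conf, anything under /etc/ceilometer/ceilometer.conf.d, and the command-line arguments ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout']. Which values come from the file versus defaults cannot be told apart from the dump alone; a minimal ceilometer.conf sketch consistent with the non-default values shown (secrets are masked as **** in the dump and omitted here) might read:

    [DEFAULT]
    debug = True
    hypervisor_inspector = libvirt
    libvirt_type = kvm
    log_dir = /var/log/ceilometer
    tenant_name_discovery = False
    # transport_url is masked (****) in the dump

    [compute]
    instance_discovery_method = libvirt_metadata

    [oslo_messaging_notifications]
    driver = noop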
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.638 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 19 20:07:40 compute-0 ceilometer_agent_ipmi[227737]: 2026-02-19 20:07:40.641 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
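[annotation] The parsed dict above is the polling definition the agent loaded (polling.cfg_file = polling.yaml in the dump): one source that polls every meter matching hardware.* on a 120-second interval. Rendered back as YAML it would read:

    sources:
      - name: pollsters
        interval: 120
        meters:
          - "hardware.*"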
Feb 19 20:07:40 compute-0 sudo[228178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnoonquzraeyupkzhxneblgqdvgwwhqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531660.6777766-633-270249058901396/AnsiballZ_find.py'
Feb 19 20:07:40 compute-0 sudo[228178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.011948       1 watcher.go:83] Using in cluster k8s config
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.012138       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Feb 19 20:07:41 compute-0 kepler[227990]: E0219 20:07:41.012347       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
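[annotation] Kepler first tries the in-cluster Kubernetes client configuration, which only works inside a pod where the API service environment variables are injected. On this standalone EDPM node they are absent, so the k8s APIserver watcher stays disabled and kepler continues without pod metadata. The gating condition the log describes amounts to (hypothetical Python rendering, not kepler's actual Go code):

    import os

    # Outside a pod neither variable is set, so in-cluster config loading
    # fails and the watcher is left disabled, as the error line above shows.
    in_cluster = bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                      and os.environ.get("KUBERNETES_SERVICE_PORT"))
    print("k8s watcher enabled:", in_cluster)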
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.015543       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.015703       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.019962       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.020116       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.026508       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.026668       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.026776       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.031427       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.031571       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.031669       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.031764       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.031860       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.031961       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.032140       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.032309       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.032446       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.032579       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.032770       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Feb 19 20:07:41 compute-0 kepler[227990]: I0219 20:07:41.033300       1 exporter.go:208] Started Kepler in 530.359953ms
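[annotation] Even with the watcher disabled, startup completes in ~530 ms and the exporter listens on 0.0.0.0:8888, matching the '8888:8888' ports entry and 'net': 'host' in the container's config_data. A sketch for scraping it from the host, assuming the conventional Prometheus /metrics path:

    import urllib.request

    # Fetch the first few hundred bytes of kepler's Prometheus exposition.
    with urllib.request.urlopen("http://localhost:8888/metrics", timeout=5) as resp:
        print(resp.read().decode()[:500])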
Feb 19 20:07:41 compute-0 python3.9[228181]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 19 20:07:41 compute-0 sudo[228178]: pam_unix(sudo:session): session closed for user root
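[annotation] The sudo/AnsiballZ pair above is a Zuul-driven ansible.builtin.find task enumerating the healthcheck mount directories: with file_type=directory, recurse=False, and hidden=False it lists only the immediate, non-hidden subdirectories of /var/lib/openstack/healthchecks/. A roughly equivalent local check:

    from pathlib import Path

    # Immediate, non-hidden subdirectories, mirroring file_type=directory,
    # recurse=False, hidden=False from the module arguments in the log.
    root = Path("/var/lib/openstack/healthchecks/")
    print([p.name for p in root.iterdir()
           if p.is_dir() and not p.name.startswith(".")])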
Feb 19 20:07:41 compute-0 sudo[228341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcfhoooilrxpckrsdyzvacjyztlidzyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531661.4738472-643-202207121709716/AnsiballZ_podman_container_info.py'
Feb 19 20:07:41 compute-0 sudo[228341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:42 compute-0 python3.9[228344]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 19 20:07:42 compute-0 sudo[228341]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:42 compute-0 sudo[228506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbefbencnzwglrpymieglgghmcdqtdwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531662.4863093-651-133177526957989/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:42 compute-0 sudo[228506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:43 compute-0 python3.9[228509]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:43 compute-0 systemd[1]: Started libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope.
Feb 19 20:07:43 compute-0 podman[228510]: 2026-02-19 20:07:43.279821522 +0000 UTC m=+0.096087833 container exec 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 19 20:07:43 compute-0 podman[228510]: 2026-02-19 20:07:43.313122959 +0000 UTC m=+0.129389250 container exec_died 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 19 20:07:43 compute-0 systemd[1]: libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope: Deactivated successfully.
Feb 19 20:07:43 compute-0 sudo[228506]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:43 compute-0 sudo[228701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zelabymyzhdiatvbooksftmqulydaxxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531663.558016-659-223878201509131/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:43 compute-0 sudo[228701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:43 compute-0 podman[228664]: 2026-02-19 20:07:43.969099523 +0000 UTC m=+0.092735858 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:07:44 compute-0 python3.9[228708]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:44 compute-0 systemd[1]: Started libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope.
Feb 19 20:07:44 compute-0 podman[228716]: 2026-02-19 20:07:44.295653034 +0000 UTC m=+0.138548138 container exec 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:07:44 compute-0 podman[228716]: 2026-02-19 20:07:44.330386007 +0000 UTC m=+0.173281141 container exec_died 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:07:44 compute-0 systemd[1]: libpod-conmon-626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0.scope: Deactivated successfully.
Feb 19 20:07:44 compute-0 sudo[228701]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:44 compute-0 sudo[228904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacegjmxxxgrkxykhnrlzyrdwrczzcpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531664.589255-667-22829299270147/AnsiballZ_file.py'
Feb 19 20:07:44 compute-0 sudo[228904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:45 compute-0 podman[228868]: 2026-02-19 20:07:45.005651388 +0000 UTC m=+0.096792466 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:07:45 compute-0 python3.9[228914]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:45 compute-0 sudo[228904]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:45 compute-0 sudo[229064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebqsisofmpriicoghrddtdlxskmuzyli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531665.4539351-676-162692278298141/AnsiballZ_podman_container_info.py'
Feb 19 20:07:45 compute-0 sudo[229064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:45 compute-0 python3.9[229067]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 19 20:07:46 compute-0 sudo[229064]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:46 compute-0 sudo[229229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnlummvhmtrxvzglvycjrojkjyzgxvts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531666.318299-684-222792991653749/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:46 compute-0 sudo[229229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:46 compute-0 python3.9[229232]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:46 compute-0 systemd[1]: Started libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope.
Feb 19 20:07:47 compute-0 podman[229233]: 2026-02-19 20:07:47.008723654 +0000 UTC m=+0.153727457 container exec 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 19 20:07:47 compute-0 podman[229233]: 2026-02-19 20:07:47.043616802 +0000 UTC m=+0.188620525 container exec_died 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:07:47 compute-0 systemd[1]: libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope: Deactivated successfully.
Feb 19 20:07:47 compute-0 sudo[229229]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:47 compute-0 sudo[229411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbbvpqiwoxlvldmdeopigrzstqctgqba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531667.3172517-692-77718459596949/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:47 compute-0 sudo[229411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:47 compute-0 python3.9[229414]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:47 compute-0 systemd[1]: Started libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope.
Feb 19 20:07:47 compute-0 podman[229415]: 2026-02-19 20:07:47.975541315 +0000 UTC m=+0.133494730 container exec 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:07:48 compute-0 podman[229415]: 2026-02-19 20:07:48.006881251 +0000 UTC m=+0.164834666 container exec_died 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 19 20:07:48 compute-0 systemd[1]: libpod-conmon-59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e.scope: Deactivated successfully.
Feb 19 20:07:48 compute-0 sudo[229411]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:48 compute-0 podman[229429]: 2026-02-19 20:07:48.078586076 +0000 UTC m=+0.131872409 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 19 20:07:48 compute-0 sudo[229618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwvugmogwpnlladpldmoickatvilsbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531668.2474756-700-141292881252811/AnsiballZ_file.py'
Feb 19 20:07:48 compute-0 sudo[229618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:48 compute-0 python3.9[229621]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:48 compute-0 sudo[229618]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:49 compute-0 sudo[229771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yunlzzprnurzxbktffhixfcxivkqelcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531669.0479362-709-198381754160559/AnsiballZ_podman_container_info.py'
Feb 19 20:07:49 compute-0 sudo[229771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:49 compute-0 python3.9[229774]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 19 20:07:49 compute-0 sudo[229771]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:50 compute-0 sudo[229938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jynlseqmlpmrbgareacxpxjotekksyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531669.8254015-717-166934518610907/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:50 compute-0 sudo[229938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:50 compute-0 python3.9[229941]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:50 compute-0 systemd[1]: Started libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope.
Feb 19 20:07:50 compute-0 podman[229942]: 2026-02-19 20:07:50.47262964 +0000 UTC m=+0.111505708 container exec 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 19 20:07:50 compute-0 podman[229942]: 2026-02-19 20:07:50.505261317 +0000 UTC m=+0.144137395 container exec_died 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 19 20:07:50 compute-0 systemd[1]: libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope: Deactivated successfully.
Feb 19 20:07:50 compute-0 sudo[229938]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:51 compute-0 sudo[230123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwphdxazflfogyhvwegaqxlytiygzhll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531670.7519825-725-110189265289974/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:51 compute-0 sudo[230123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:51 compute-0 python3.9[230126]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:51 compute-0 systemd[1]: Started libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope.
Feb 19 20:07:51 compute-0 podman[230127]: 2026-02-19 20:07:51.390246394 +0000 UTC m=+0.114554935 container exec 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:07:51 compute-0 podman[230127]: 2026-02-19 20:07:51.422408446 +0000 UTC m=+0.146716947 container exec_died 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:07:51 compute-0 systemd[1]: libpod-conmon-7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a.scope: Deactivated successfully.
Feb 19 20:07:51 compute-0 sudo[230123]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:51 compute-0 sudo[230308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vprdyloklmrzpvrppkqffkifdzvkbnum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531671.6230047-733-104554972458798/AnsiballZ_file.py'
Feb 19 20:07:51 compute-0 sudo[230308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:52 compute-0 python3.9[230311]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:52 compute-0 sudo[230308]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:52 compute-0 sudo[230461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspbpsipzdojxdbwheheusydqziegvrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531672.389458-742-100866410989703/AnsiballZ_podman_container_info.py'
Feb 19 20:07:52 compute-0 sudo[230461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:52 compute-0 python3.9[230464]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 19 20:07:52 compute-0 sudo[230461]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:53 compute-0 sudo[230626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsrkrgvpqqgczseeblughpmgrcbbfrgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531673.1703575-750-68997464356377/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:53 compute-0 sudo[230626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:53 compute-0 python3.9[230629]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:53 compute-0 systemd[1]: Started libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope.
Feb 19 20:07:53 compute-0 podman[230630]: 2026-02-19 20:07:53.750285088 +0000 UTC m=+0.095040890 container exec fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:07:53 compute-0 podman[230630]: 2026-02-19 20:07:53.781977035 +0000 UTC m=+0.126732827 container exec_died fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:07:53 compute-0 systemd[1]: libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope: Deactivated successfully.
Feb 19 20:07:53 compute-0 sudo[230626]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:54 compute-0 sudo[230812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxmnhinelfsiobbgesfktxzdqajstha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531674.0026867-758-46638199341625/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:54 compute-0 sudo[230812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:54 compute-0 python3.9[230815]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:54 compute-0 systemd[1]: Started libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope.
Feb 19 20:07:54 compute-0 podman[230816]: 2026-02-19 20:07:54.643151493 +0000 UTC m=+0.127884283 container exec fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:07:54 compute-0 podman[230816]: 2026-02-19 20:07:54.676571224 +0000 UTC m=+0.161303954 container exec_died fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:07:54 compute-0 systemd[1]: libpod-conmon-fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad.scope: Deactivated successfully.
Feb 19 20:07:54 compute-0 sudo[230812]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:55 compute-0 sudo[230996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhxocnrhrglrdsgnlrjquqxhplvbfeiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531674.9355743-766-12313719172735/AnsiballZ_file.py'
Feb 19 20:07:55 compute-0 sudo[230996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:55 compute-0 python3.9[230999]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:55 compute-0 sudo[230996]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:55 compute-0 sudo[231149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlijdgqxzcqrqfpjrrcrevtyizdoxnsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531675.6455202-775-188590590261441/AnsiballZ_podman_container_info.py'
Feb 19 20:07:55 compute-0 sudo[231149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:56 compute-0 python3.9[231152]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 19 20:07:56 compute-0 sudo[231149]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:56 compute-0 sudo[231314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpfgyuxjojjvltvzwkrbuuezszagebo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531676.3792655-783-232008129608417/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:56 compute-0 sudo[231314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:56 compute-0 python3.9[231317]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:57 compute-0 systemd[1]: Started libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope.
Feb 19 20:07:57 compute-0 podman[231318]: 2026-02-19 20:07:57.092378863 +0000 UTC m=+0.117289471 container exec 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:07:57 compute-0 podman[231318]: 2026-02-19 20:07:57.123059758 +0000 UTC m=+0.147970336 container exec_died 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:07:57 compute-0 systemd[1]: libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope: Deactivated successfully.
Feb 19 20:07:57 compute-0 sudo[231314]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:57 compute-0 sudo[231496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esobvtivkcipgevxvrvmamwrbjevqskp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531677.4560294-791-140265839993879/AnsiballZ_podman_container_exec.py'
Feb 19 20:07:57 compute-0 sudo[231496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:58 compute-0 python3.9[231499]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:07:58 compute-0 systemd[1]: Started libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope.
Feb 19 20:07:58 compute-0 podman[231500]: 2026-02-19 20:07:58.239465665 +0000 UTC m=+0.221580211 container exec 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:07:58 compute-0 podman[231500]: 2026-02-19 20:07:58.299071709 +0000 UTC m=+0.281186215 container exec_died 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:07:58 compute-0 sudo[231496]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:58 compute-0 systemd[1]: libpod-conmon-9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2.scope: Deactivated successfully.
Feb 19 20:07:58 compute-0 sudo[231677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuebojjmsgnzwdgajyltiddwehbzvxky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531678.680016-799-212450611701919/AnsiballZ_file.py'
Feb 19 20:07:58 compute-0 sudo[231677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:59 compute-0 python3.9[231680]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:07:59 compute-0 sudo[231677]: pam_unix(sudo:session): session closed for user root
Feb 19 20:07:59 compute-0 podman[204724]: time="2026-02-19T20:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:07:59 compute-0 podman[231804]: 2026-02-19 20:07:59.747352375 +0000 UTC m=+0.060128432 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:07:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28008 "" "Go-http-client/1.1"
Feb 19 20:07:59 compute-0 sudo[231854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bncwlyerafvmvyxjvtadmwewxhfxljev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531679.4462528-808-7615789549828/AnsiballZ_podman_container_info.py'
Feb 19 20:07:59 compute-0 sudo[231854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:07:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3834 "" "Go-http-client/1.1"
Feb 19 20:07:59 compute-0 python3.9[231857]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 19 20:08:00 compute-0 sudo[231854]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:00 compute-0 sudo[232021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyehjflrvjeayekkdsfqyuhxxxsaoxjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531680.2483075-816-96809421545820/AnsiballZ_podman_container_exec.py'
Feb 19 20:08:00 compute-0 sudo[232021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:00 compute-0 python3.9[232024]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:08:01 compute-0 systemd[1]: Started libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope.
Feb 19 20:08:01 compute-0 podman[232025]: 2026-02-19 20:08:01.050868647 +0000 UTC m=+0.198021070 container exec 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.openshift.tags=minimal rhel9)
Feb 19 20:08:01 compute-0 podman[232050]: 2026-02-19 20:08:01.135642163 +0000 UTC m=+0.072660727 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.7, architecture=x86_64, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter)
Feb 19 20:08:01 compute-0 podman[232025]: 2026-02-19 20:08:01.158970847 +0000 UTC m=+0.306123280 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1770267347, com.redhat.component=ubi9-minimal-container, vcs-type=git)
Feb 19 20:08:01 compute-0 systemd[1]: libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope: Deactivated successfully.
Feb 19 20:08:01 compute-0 podman[232040]: 2026-02-19 20:08:01.202233537 +0000 UTC m=+0.149748771 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.7, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, release=1770267347, com.redhat.component=ubi9-minimal-container)
Feb 19 20:08:01 compute-0 sudo[232021]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:01 compute-0 openstack_network_exporter[207898]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:08:01 compute-0 openstack_network_exporter[207898]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:08:01 compute-0 sudo[232227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luyxxoowgqxfvrqqxzsfttihpxupwyti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531681.4357402-824-263072238554220/AnsiballZ_podman_container_exec.py'
Feb 19 20:08:01 compute-0 sudo[232227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:01 compute-0 python3.9[232230]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:08:02 compute-0 systemd[1]: Started libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope.
Feb 19 20:08:02 compute-0 podman[232231]: 2026-02-19 20:08:02.112789849 +0000 UTC m=+0.152013382 container exec 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., release=1770267347, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64)
Feb 19 20:08:02 compute-0 podman[232251]: 2026-02-19 20:08:02.189287745 +0000 UTC m=+0.063595641 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:08:02 compute-0 podman[232231]: 2026-02-19 20:08:02.240473296 +0000 UTC m=+0.279696839 container exec_died 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1770267347, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter)
Feb 19 20:08:02 compute-0 systemd[1]: libpod-conmon-3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692.scope: Deactivated successfully.
Feb 19 20:08:02 compute-0 sudo[232227]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:02 compute-0 sudo[232413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hueakobacgtsejcggwyekcktsrliiuub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531682.562325-832-61521390242476/AnsiballZ_file.py'
Feb 19 20:08:02 compute-0 sudo[232413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:03 compute-0 python3.9[232416]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:03 compute-0 sudo[232413]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:03 compute-0 sudo[232566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsehgajfajgaihbxpzdeupbvsmwhrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531683.393909-841-33973288698182/AnsiballZ_podman_container_info.py'
Feb 19 20:08:03 compute-0 sudo[232566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:03 compute-0 python3.9[232569]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Feb 19 20:08:03 compute-0 sudo[232566]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:04 compute-0 sudo[232731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-losvkhmhlesqdjmgbobwsjlglsljzbjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531684.1016974-849-142472155456497/AnsiballZ_podman_container_exec.py'
Feb 19 20:08:04 compute-0 sudo[232731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:04 compute-0 python3.9[232734]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:08:04 compute-0 systemd[1]: Started libpod-conmon-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.scope.
Feb 19 20:08:04 compute-0 podman[232735]: 2026-02-19 20:08:04.683029985 +0000 UTC m=+0.094224834 container exec ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 19 20:08:04 compute-0 podman[232735]: 2026-02-19 20:08:04.717390326 +0000 UTC m=+0.128585215 container exec_died ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:08:04 compute-0 podman[232750]: 2026-02-19 20:08:04.7496188 +0000 UTC m=+0.076101064 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:08:04 compute-0 systemd[1]: libpod-conmon-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.scope: Deactivated successfully.
Feb 19 20:08:04 compute-0 sudo[232731]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:05 compute-0 sudo[232933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avwrbanxwkwmshzrrcsfarotryrkoeaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531685.0159645-857-15327821677912/AnsiballZ_podman_container_exec.py'
Feb 19 20:08:05 compute-0 sudo[232933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:05 compute-0 python3.9[232936]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:08:05 compute-0 systemd[1]: Started libpod-conmon-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.scope.
Feb 19 20:08:05 compute-0 podman[232937]: 2026-02-19 20:08:05.674786831 +0000 UTC m=+0.091989404 container exec ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 19 20:08:05 compute-0 podman[232937]: 2026-02-19 20:08:05.705081854 +0000 UTC m=+0.122284397 container exec_died ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:08:05 compute-0 systemd[1]: libpod-conmon-ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e.scope: Deactivated successfully.
Feb 19 20:08:05 compute-0 sudo[232933]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:06 compute-0 sudo[233115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtomhpwjzfxusygsrvzmnnbqjyglbuvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531685.9244475-865-55591734042984/AnsiballZ_file.py'
Feb 19 20:08:06 compute-0 sudo[233115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:06 compute-0 python3.9[233118]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:06 compute-0 sudo[233115]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:06 compute-0 sudo[233268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lncrgwactehwxizqfnddyxityyrassdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531686.5999715-874-82514494335605/AnsiballZ_podman_container_info.py'
Feb 19 20:08:06 compute-0 sudo[233268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:07 compute-0 python3.9[233271]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Feb 19 20:08:07 compute-0 sudo[233268]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:07 compute-0 sudo[233434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvggvcyknahtrccpydvyvhqlyizjqqrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531687.3851311-882-85331287065663/AnsiballZ_podman_container_exec.py'
Feb 19 20:08:07 compute-0 sudo[233434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:07 compute-0 python3.9[233437]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:08:07 compute-0 systemd[1]: Started libpod-conmon-9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.scope.
Feb 19 20:08:08 compute-0 podman[233438]: 2026-02-19 20:08:08.00794747 +0000 UTC m=+0.086696449 container exec 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, version=9.4, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, vcs-type=git)
Feb 19 20:08:08 compute-0 podman[233438]: 2026-02-19 20:08:08.038924944 +0000 UTC m=+0.117673933 container exec_died 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, config_id=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 19 20:08:08 compute-0 systemd[1]: libpod-conmon-9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.scope: Deactivated successfully.
Feb 19 20:08:08 compute-0 sudo[233434]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:08 compute-0 sudo[233622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kapukbiaufabnrypsekycexalnmlffgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531688.309858-890-76911057470308/AnsiballZ_podman_container_exec.py'
Feb 19 20:08:08 compute-0 sudo[233622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:08 compute-0 python3.9[233625]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 19 20:08:08 compute-0 systemd[1]: Started libpod-conmon-9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.scope.
Feb 19 20:08:08 compute-0 podman[233626]: 2026-02-19 20:08:08.908387413 +0000 UTC m=+0.095164804 container exec 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=kepler, release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 19 20:08:08 compute-0 podman[233626]: 2026-02-19 20:08:08.944117137 +0000 UTC m=+0.130894558 container exec_died 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, name=ubi9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Feb 19 20:08:08 compute-0 systemd[1]: libpod-conmon-9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce.scope: Deactivated successfully.
Feb 19 20:08:08 compute-0 sudo[233622]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:09 compute-0 podman[233749]: 2026-02-19 20:08:09.396401644 +0000 UTC m=+0.076715805 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 19 20:08:09 compute-0 sudo[233825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctwvvsbfuyyjpemfytzhsioahgbmetox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531689.1733074-898-12265564141655/AnsiballZ_file.py'
Feb 19 20:08:09 compute-0 sudo[233825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:09 compute-0 python3.9[233828]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:09 compute-0 sudo[233825]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:10 compute-0 sshd-session[233522]: Received disconnect from 103.213.238.91 port 58378:11: Bye Bye [preauth]
Feb 19 20:08:10 compute-0 sshd-session[233522]: Disconnected from authenticating user root 103.213.238.91 port 58378 [preauth]
Feb 19 20:08:10 compute-0 sudo[233978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kievxsfslilltizqeabuznaducfmaxrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531690.014304-907-5188489312496/AnsiballZ_file.py'
Feb 19 20:08:10 compute-0 sudo[233978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:10 compute-0 python3.9[233981]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:10 compute-0 sudo[233978]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:11 compute-0 sudo[234147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rswtsgwesxaeimbexfqanwcpafghuwcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531690.7841628-915-225760999423237/AnsiballZ_stat.py'
Feb 19 20:08:11 compute-0 podman[234105]: 2026-02-19 20:08:11.094360892 +0000 UTC m=+0.072895884 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, architecture=x86_64, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 19 20:08:11 compute-0 sudo[234147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:11 compute-0 python3.9[234153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:11 compute-0 sudo[234147]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:11 compute-0 sudo[234274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqkmhpcdoqtecexsbjahvysqwkjuwyrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531690.7841628-915-225760999423237/AnsiballZ_copy.py'
Feb 19 20:08:11 compute-0 sudo[234274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:11 compute-0 python3.9[234277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771531690.7841628-915-225760999423237/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:12 compute-0 sudo[234274]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:12 compute-0 sudo[234427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanynowjciwgensrbzqwpekmzitfuvwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531692.282853-931-21947261965690/AnsiballZ_file.py'
Feb 19 20:08:12 compute-0 sudo[234427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:12 compute-0 python3.9[234430]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:12 compute-0 sudo[234427]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:13 compute-0 sudo[234580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efdqkpuwxyavyzgsrtwpzdnnivrztlxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531693.0264993-939-132899040563954/AnsiballZ_stat.py'
Feb 19 20:08:13 compute-0 sudo[234580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:13 compute-0 python3.9[234583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:13 compute-0 sudo[234580]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:13 compute-0 sudo[234659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwylhqcleasrvdgqntwnhrxkkfmerhik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531693.0264993-939-132899040563954/AnsiballZ_file.py'
Feb 19 20:08:13 compute-0 sudo[234659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:13 compute-0 python3.9[234662]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:14 compute-0 sudo[234659]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:14 compute-0 podman[234663]: 2026-02-19 20:08:14.078985863 +0000 UTC m=+0.072704518 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:08:14 compute-0 sudo[234835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzhrpkvxkwwzycpaddjwyuigpxxnuxxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531694.1922545-951-65318339303559/AnsiballZ_stat.py'
Feb 19 20:08:14 compute-0 sudo[234835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:14 compute-0 python3.9[234838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:14 compute-0 sudo[234835]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:14 compute-0 sudo[234914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clsupacuseqnlttkupgnajlvjgupeymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531694.1922545-951-65318339303559/AnsiballZ_file.py'
Feb 19 20:08:14 compute-0 sudo[234914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:15 compute-0 python3.9[234917]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5chjznfy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:15 compute-0 sudo[234914]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:15 compute-0 podman[234918]: 2026-02-19 20:08:15.252560797 +0000 UTC m=+0.088780544 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:08:15 compute-0 sudo[235089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjeiipwiuhircidqoxtvivijywbccfsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531695.3580966-963-43154376249052/AnsiballZ_stat.py'
Feb 19 20:08:15 compute-0 sudo[235089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:15 compute-0 python3.9[235092]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:15 compute-0 sudo[235089]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:16 compute-0 sudo[235168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutcizvnwljmcskbrysqpoumufkemsua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531695.3580966-963-43154376249052/AnsiballZ_file.py'
Feb 19 20:08:16 compute-0 sudo[235168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:16 compute-0 python3.9[235171]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:16 compute-0 sudo[235168]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:16 compute-0 sudo[235321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbtebvagacxbmfptmxbieiifmnqhbkbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531696.6521223-976-78290307499070/AnsiballZ_command.py'
Feb 19 20:08:16 compute-0 sudo[235321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:17 compute-0 python3.9[235324]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:08:17 compute-0 sudo[235321]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:17 compute-0 sudo[235475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kljmccdtvkfphuzemtftahlgugepjrro ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531697.3937497-984-82222158386185/AnsiballZ_edpm_nftables_from_files.py'
Feb 19 20:08:17 compute-0 sudo[235475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:18 compute-0 python3[235478]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 19 20:08:18 compute-0 sudo[235475]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:18 compute-0 podman[235479]: 2026-02-19 20:08:18.240545113 +0000 UTC m=+0.078394487 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:08:18 compute-0 sudo[235653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricczfkgtvrvjzcydfimazupasyndaaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531698.3378804-992-100494417583156/AnsiballZ_stat.py'
Feb 19 20:08:18 compute-0 sudo[235653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:18 compute-0 python3.9[235656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:18 compute-0 sudo[235653]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:19 compute-0 sudo[235732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xffzntkosggafprolowbzxigcazanmfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531698.3378804-992-100494417583156/AnsiballZ_file.py'
Feb 19 20:08:19 compute-0 sudo[235732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:19 compute-0 python3.9[235735]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:19 compute-0 sudo[235732]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:19 compute-0 sudo[235886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmpnxorgvrmxkxtpbqximjrmzmygqtlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531699.5762532-1004-280962287108074/AnsiballZ_stat.py'
Feb 19 20:08:19 compute-0 sudo[235886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:20 compute-0 python3.9[235889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:20 compute-0 sudo[235886]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:20 compute-0 sudo[235965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxuktjwvcrswzgruifanauajpzgpoqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531699.5762532-1004-280962287108074/AnsiballZ_file.py'
Feb 19 20:08:20 compute-0 sudo[235965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:20 compute-0 python3.9[235968]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:20 compute-0 sudo[235965]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:21 compute-0 sudo[236118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxgfrlhqkameggxpmiivwbwjsqaifceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531700.978486-1016-85902809924320/AnsiballZ_stat.py'
Feb 19 20:08:21 compute-0 sudo[236118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:21 compute-0 python3.9[236121]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:21 compute-0 sudo[236118]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:21 compute-0 sudo[236197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dputeyxbmxtzaptqjssjizjwwuqylnmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531700.978486-1016-85902809924320/AnsiballZ_file.py'
Feb 19 20:08:21 compute-0 sudo[236197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:21 compute-0 python3.9[236200]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:22 compute-0 sudo[236197]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:22 compute-0 sudo[236350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfafkfwegqdvwcbffncnepwxwoyengcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531702.2203174-1028-78920263659448/AnsiballZ_stat.py'
Feb 19 20:08:22 compute-0 sudo[236350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:22 compute-0 python3.9[236353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:22 compute-0 sudo[236350]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:23 compute-0 sudo[236429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djtvnlfalnibsywimyblqtmyaxfttktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531702.2203174-1028-78920263659448/AnsiballZ_file.py'
Feb 19 20:08:23 compute-0 sudo[236429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:23 compute-0 python3.9[236432]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:23 compute-0 sudo[236429]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:23 compute-0 sudo[236582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iruxklsypzwhrwpqlbvlwlgxxzppmjav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531703.5099456-1040-185972671603417/AnsiballZ_stat.py'
Feb 19 20:08:23 compute-0 sudo[236582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:24 compute-0 python3.9[236585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:24 compute-0 sudo[236582]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:24 compute-0 nova_compute[188777]: 2026-02-19 20:08:24.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:24 compute-0 sudo[236708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznybvxbcflzjiiymmpelmhpezbznkiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531703.5099456-1040-185972671603417/AnsiballZ_copy.py'
Feb 19 20:08:24 compute-0 sudo[236708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:24 compute-0 python3.9[236711]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771531703.5099456-1040-185972671603417/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:24 compute-0 sudo[236708]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.286 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.313 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.314 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.314 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.315 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:08:25 compute-0 sudo[236861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toixxznzpfyedyfgtmpyghjdzmyzptxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531704.9938378-1055-83758336153538/AnsiballZ_file.py'
Feb 19 20:08:25 compute-0 sudo[236861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.622 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.623 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5603MB free_disk=72.30548477172852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.623 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.623 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:08:25 compute-0 python3.9[236864]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:25 compute-0 sudo[236861]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.681 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.681 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.708 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.722 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.723 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:08:25 compute-0 nova_compute[188777]: 2026-02-19 20:08:25.724 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:08:26 compute-0 sudo[237016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smvkxmwwhjoptjrmpvqnqgthleswqcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531705.850351-1063-222899877105550/AnsiballZ_command.py'
Feb 19 20:08:26 compute-0 sudo[237016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:26 compute-0 python3.9[237019]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:08:26 compute-0 sudo[237016]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:27 compute-0 sudo[237172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csexxheypgcxpnwgbswawvoehuupifsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531706.6536245-1071-191601517951009/AnsiballZ_blockinfile.py'
Feb 19 20:08:27 compute-0 sudo[237172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:27 compute-0 sshd-session[236945]: Invalid user minecraft from 103.154.77.48 port 38760
Feb 19 20:08:27 compute-0 python3.9[237175]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:27 compute-0 sudo[237172]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:27 compute-0 sshd-session[236945]: Received disconnect from 103.154.77.48 port 38760:11: Bye Bye [preauth]
Feb 19 20:08:27 compute-0 sshd-session[236945]: Disconnected from invalid user minecraft 103.154.77.48 port 38760 [preauth]
Feb 19 20:08:27 compute-0 nova_compute[188777]: 2026-02-19 20:08:27.702 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:27 compute-0 nova_compute[188777]: 2026-02-19 20:08:27.702 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:27 compute-0 nova_compute[188777]: 2026-02-19 20:08:27.702 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:27 compute-0 nova_compute[188777]: 2026-02-19 20:08:27.703 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:28 compute-0 sudo[237325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfhogbmryspniiqquoapndjiieokqfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531707.6544812-1080-5021942770407/AnsiballZ_command.py'
Feb 19 20:08:28 compute-0 sudo[237325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:28 compute-0 python3.9[237328]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:08:28 compute-0 nova_compute[188777]: 2026-02-19 20:08:28.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:28 compute-0 nova_compute[188777]: 2026-02-19 20:08:28.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:08:28 compute-0 nova_compute[188777]: 2026-02-19 20:08:28.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:08:28 compute-0 nova_compute[188777]: 2026-02-19 20:08:28.280 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:08:28 compute-0 nova_compute[188777]: 2026-02-19 20:08:28.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:28 compute-0 nova_compute[188777]: 2026-02-19 20:08:28.280 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:08:28 compute-0 sudo[237325]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:28 compute-0 sudo[237479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysqggrwqfdscfcecjnvmeulowwxshelk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531708.4841866-1088-2327384606144/AnsiballZ_stat.py'
Feb 19 20:08:28 compute-0 sudo[237479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:28 compute-0 python3.9[237482]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 19 20:08:28 compute-0 sudo[237479]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:29 compute-0 sudo[237634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhpetmddajgyillzstfqohhgjqecpxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531709.1834624-1096-231158525993187/AnsiballZ_command.py'
Feb 19 20:08:29 compute-0 sudo[237634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:29 compute-0 python3.9[237637]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:08:29 compute-0 podman[204724]: time="2026-02-19T20:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:08:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:08:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3835 "" "Go-http-client/1.1"
Feb 19 20:08:29 compute-0 sudo[237634]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:29 compute-0 podman[237641]: 2026-02-19 20:08:29.927772943 +0000 UTC m=+0.096328851 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:08:30 compute-0 nova_compute[188777]: 2026-02-19 20:08:30.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:08:30 compute-0 sudo[237812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hypadoeheqksgpfbiarybmajocueejah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531710.0169368-1104-62392001492350/AnsiballZ_file.py'
Feb 19 20:08:30 compute-0 sudo[237812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:08:30.416 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:08:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:08:30.416 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:08:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:08:30.416 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:08:30 compute-0 python3.9[237815]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:30 compute-0 sudo[237812]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:31 compute-0 sshd-session[216726]: Connection closed by 192.168.122.30 port 37808
Feb 19 20:08:31 compute-0 sshd-session[216712]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:08:31 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Feb 19 20:08:31 compute-0 systemd[1]: session-26.scope: Consumed 1min 20.392s CPU time.
Feb 19 20:08:31 compute-0 systemd-logind[810]: Session 26 logged out. Waiting for processes to exit.
Feb 19 20:08:31 compute-0 systemd-logind[810]: Removed session 26.
Feb 19 20:08:31 compute-0 podman[237840]: 2026-02-19 20:08:31.391718092 +0000 UTC m=+0.074420303 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, version=9.7, config_id=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, distribution-scope=public, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 19 20:08:31 compute-0 openstack_network_exporter[207898]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:08:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:08:31 compute-0 openstack_network_exporter[207898]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:08:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:08:35 compute-0 podman[237859]: 2026-02-19 20:08:35.416291984 +0000 UTC m=+0.102850866 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:08:36 compute-0 sshd-session[237877]: Accepted publickey for zuul from 192.168.122.30 port 38438 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 20:08:36 compute-0 systemd-logind[810]: New session 27 of user zuul.
Feb 19 20:08:36 compute-0 systemd[1]: Started Session 27 of User zuul.
Feb 19 20:08:36 compute-0 sshd-session[237877]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:08:37 compute-0 python3.9[238030]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:08:38 compute-0 sudo[238184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywpuhcyiyjwifgjplmjnvzgeswhwjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531717.9041107-29-238752093950746/AnsiballZ_systemd.py'
Feb 19 20:08:38 compute-0 sudo[238184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:38 compute-0 python3.9[238187]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Feb 19 20:08:38 compute-0 sudo[238184]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:39 compute-0 sudo[238338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnbhgqdlibymrjfwukgjipjlmqxedhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531719.069113-37-263862908139015/AnsiballZ_setup.py'
Feb 19 20:08:39 compute-0 sudo[238338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:39 compute-0 python3.9[238341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 19 20:08:40 compute-0 sudo[238338]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:40 compute-0 podman[238350]: 2026-02-19 20:08:40.12363034 +0000 UTC m=+0.086501552 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:08:40 compute-0 sudo[238442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apfgzoywcythxwbrcxdcodkjhgeggkkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531719.069113-37-263862908139015/AnsiballZ_dnf.py'
Feb 19 20:08:40 compute-0 sudo[238442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:40 compute-0 python3.9[238445]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 19 20:08:41 compute-0 podman[238447]: 2026-02-19 20:08:41.381429019 +0000 UTC m=+0.072635454 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible)
Feb 19 20:08:43 compute-0 sudo[238442]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:43 compute-0 sudo[238620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czefbbtryemobbeqpaeovpxiukqqiuyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531723.2944717-49-122782059211202/AnsiballZ_stat.py'
Feb 19 20:08:43 compute-0 sudo[238620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:43 compute-0 python3.9[238623]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:43 compute-0 sudo[238620]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:44 compute-0 podman[238671]: 2026-02-19 20:08:44.441045604 +0000 UTC m=+0.129437178 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:08:44 compute-0 sudo[238768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhitkkvukpqbyolsvrxjxucxkofkzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531723.2944717-49-122782059211202/AnsiballZ_copy.py'
Feb 19 20:08:44 compute-0 sudo[238768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:44 compute-0 python3.9[238771]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531723.2944717-49-122782059211202/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:44 compute-0 sudo[238768]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:45 compute-0 podman[238866]: 2026-02-19 20:08:45.421931082 +0000 UTC m=+0.107602992 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:08:45 compute-0 sudo[238941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrwbrpuwytszinswulsdkpmzmpqkgqxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531724.982264-64-85477898990572/AnsiballZ_file.py'
Feb 19 20:08:45 compute-0 sudo[238941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:45 compute-0 python3.9[238944]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:45 compute-0 sudo[238941]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:46 compute-0 sudo[239094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seljjiagahscfvigxhfghyujsyepiedl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531725.9325302-72-136082391986216/AnsiballZ_stat.py'
Feb 19 20:08:46 compute-0 sudo[239094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:46 compute-0 python3.9[239097]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 19 20:08:46 compute-0 sudo[239094]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:46 compute-0 sudo[239218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igqeshcoqxvltysujxxmixdwbikpwacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531725.9325302-72-136082391986216/AnsiballZ_copy.py'
Feb 19 20:08:46 compute-0 sudo[239218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:47 compute-0 python3.9[239221]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771531725.9325302-72-136082391986216/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 19 20:08:47 compute-0 sudo[239218]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:47 compute-0 sudo[239371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkalzxhlnrguolstfprndnzzowqjpnlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771531727.357452-87-122982142258207/AnsiballZ_systemd.py'
Feb 19 20:08:47 compute-0 sudo[239371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:08:48 compute-0 python3.9[239374]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 19 20:08:48 compute-0 systemd[1]: Stopping System Logging Service...
Feb 19 20:08:48 compute-0 rsyslogd[1014]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1014" x-info="https://www.rsyslog.com"] exiting on signal 15.
Feb 19 20:08:48 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Feb 19 20:08:48 compute-0 systemd[1]: Stopped System Logging Service.
Feb 19 20:08:48 compute-0 systemd[1]: rsyslog.service: Consumed 4.198s CPU time, 7.6M memory peak, read 0B from disk, written 6.0M to disk.
Feb 19 20:08:48 compute-0 systemd[1]: Starting System Logging Service...
Feb 19 20:08:48 compute-0 rsyslogd[239379]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="239379" x-info="https://www.rsyslog.com"] start
Feb 19 20:08:48 compute-0 systemd[1]: Started System Logging Service.
Feb 19 20:08:48 compute-0 rsyslogd[239379]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 20:08:48 compute-0 rsyslogd[239379]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Feb 19 20:08:48 compute-0 rsyslogd[239379]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Feb 19 20:08:48 compute-0 rsyslogd[239379]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Feb 19 20:08:48 compute-0 podman[239378]: 2026-02-19 20:08:48.446445853 +0000 UTC m=+0.131633135 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 19 20:08:48 compute-0 sudo[239371]: pam_unix(sudo:session): session closed for user root
Feb 19 20:08:48 compute-0 rsyslogd[239379]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Feb 19 20:08:48 compute-0 sshd-session[237880]: Connection closed by 192.168.122.30 port 38438
Feb 19 20:08:48 compute-0 sshd-session[237877]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:08:48 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Feb 19 20:08:48 compute-0 systemd[1]: session-27.scope: Consumed 9.328s CPU time.
Feb 19 20:08:48 compute-0 systemd-logind[810]: Session 27 logged out. Waiting for processes to exit.
Feb 19 20:08:48 compute-0 systemd-logind[810]: Removed session 27.
Feb 19 20:08:59 compute-0 podman[204724]: time="2026-02-19T20:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:08:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:08:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3840 "" "Go-http-client/1.1"
Feb 19 20:09:00 compute-0 podman[239434]: 2026-02-19 20:09:00.458765101 +0000 UTC m=+0.123286958 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:09:01 compute-0 openstack_network_exporter[207898]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:09:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:09:01 compute-0 openstack_network_exporter[207898]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:09:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:09:02 compute-0 sshd-session[239458]: Invalid user systemd from 103.179.56.24 port 38300
Feb 19 20:09:02 compute-0 podman[239460]: 2026-02-19 20:09:02.359719348 +0000 UTC m=+0.084960934 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9)
Feb 19 20:09:02 compute-0 sshd-session[239458]: Received disconnect from 103.179.56.24 port 38300:11: Bye Bye [preauth]
Feb 19 20:09:02 compute-0 sshd-session[239458]: Disconnected from invalid user systemd 103.179.56.24 port 38300 [preauth]
Feb 19 20:09:06 compute-0 podman[239481]: 2026-02-19 20:09:06.410893161 +0000 UTC m=+0.095910563 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 19 20:09:10 compute-0 podman[239500]: 2026-02-19 20:09:10.391437103 +0000 UTC m=+0.071555710 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Feb 19 20:09:12 compute-0 podman[239519]: 2026-02-19 20:09:12.381245415 +0000 UTC m=+0.069000842 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:09:14 compute-0 podman[239538]: 2026-02-19 20:09:14.770286433 +0000 UTC m=+0.081250460 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.132 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.133 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.133 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.135 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.135 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.136 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.136 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.136 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.136 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.137 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.137 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.137 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.137 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.139 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.139 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.140 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f5246810>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.147 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.148 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.148 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.148 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.148 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.148 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.148 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.149 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.149 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.149 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.149 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.150 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.150 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.150 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.150 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.151 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.151 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.152 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.152 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.152 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.152 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.153 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.153 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.153 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.153 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.154 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.155 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.156 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.156 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.156 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.156 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.156 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.156 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.157 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.158 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.158 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.158 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:09:15.158 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:09:16 compute-0 podman[239561]: 2026-02-19 20:09:16.43511889 +0000 UTC m=+0.120465350 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 19 20:09:19 compute-0 podman[239580]: 2026-02-19 20:09:19.434333951 +0000 UTC m=+0.120268584 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:09:22 compute-0 sshd-session[239608]: Invalid user liang from 158.174.210.161 port 20219
Feb 19 20:09:23 compute-0 sshd-session[239608]: Received disconnect from 158.174.210.161 port 20219:11: Bye Bye [preauth]
Feb 19 20:09:23 compute-0 sshd-session[239608]: Disconnected from invalid user liang 158.174.210.161 port 20219 [preauth]
Feb 19 20:09:24 compute-0 nova_compute[188777]: 2026-02-19 20:09:24.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:24 compute-0 nova_compute[188777]: 2026-02-19 20:09:24.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:09:24 compute-0 nova_compute[188777]: 2026-02-19 20:09:24.287 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:09:24 compute-0 nova_compute[188777]: 2026-02-19 20:09:24.288 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:24 compute-0 nova_compute[188777]: 2026-02-19 20:09:24.288 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:09:24 compute-0 nova_compute[188777]: 2026-02-19 20:09:24.304 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:24 compute-0 sshd-session[239610]: Invalid user n8n from 125.31.2.160 port 51584
Feb 19 20:09:24 compute-0 sshd-session[239610]: Received disconnect from 125.31.2.160 port 51584:11: Bye Bye [preauth]
Feb 19 20:09:24 compute-0 sshd-session[239610]: Disconnected from invalid user n8n 125.31.2.160 port 51584 [preauth]
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.320 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.321 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.353 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.354 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.354 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.355 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.776 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.778 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5691MB free_disk=72.30258560180664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.778 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.779 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.840 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.840 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.944 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.995 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:09:26 compute-0 nova_compute[188777]: 2026-02-19 20:09:26.996 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:09:27 compute-0 nova_compute[188777]: 2026-02-19 20:09:27.050 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:09:27 compute-0 nova_compute[188777]: 2026-02-19 20:09:27.073 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:09:27 compute-0 nova_compute[188777]: 2026-02-19 20:09:27.096 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:09:27 compute-0 nova_compute[188777]: 2026-02-19 20:09:27.110 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:09:27 compute-0 nova_compute[188777]: 2026-02-19 20:09:27.112 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:09:27 compute-0 nova_compute[188777]: 2026-02-19 20:09:27.113 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:09:28 compute-0 nova_compute[188777]: 2026-02-19 20:09:28.056 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:28 compute-0 nova_compute[188777]: 2026-02-19 20:09:28.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:28 compute-0 nova_compute[188777]: 2026-02-19 20:09:28.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:28 compute-0 nova_compute[188777]: 2026-02-19 20:09:28.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:29 compute-0 nova_compute[188777]: 2026-02-19 20:09:29.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:29 compute-0 nova_compute[188777]: 2026-02-19 20:09:29.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:09:29 compute-0 podman[204724]: time="2026-02-19T20:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:09:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:09:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3842 "" "Go-http-client/1.1"
Feb 19 20:09:30 compute-0 sshd-session[239612]: Invalid user dixi from 83.235.16.111 port 46044
Feb 19 20:09:30 compute-0 nova_compute[188777]: 2026-02-19 20:09:30.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:30 compute-0 nova_compute[188777]: 2026-02-19 20:09:30.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:09:30 compute-0 nova_compute[188777]: 2026-02-19 20:09:30.267 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:09:30 compute-0 nova_compute[188777]: 2026-02-19 20:09:30.281 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:09:30 compute-0 nova_compute[188777]: 2026-02-19 20:09:30.282 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:09:30 compute-0 sshd-session[239612]: Received disconnect from 83.235.16.111 port 46044:11: Bye Bye [preauth]
Feb 19 20:09:30 compute-0 sshd-session[239612]: Disconnected from invalid user dixi 83.235.16.111 port 46044 [preauth]
Feb 19 20:09:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:09:30.417 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:09:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:09:30.418 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:09:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:09:30.418 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:09:31 compute-0 podman[239614]: 2026-02-19 20:09:31.409968146 +0000 UTC m=+0.090684831 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:09:31 compute-0 openstack_network_exporter[207898]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:09:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:09:31 compute-0 openstack_network_exporter[207898]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:09:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:09:32 compute-0 sshd-session[239637]: Accepted publickey for zuul from 38.102.83.176 port 56624 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 20:09:32 compute-0 systemd-logind[810]: New session 28 of user zuul.
Feb 19 20:09:32 compute-0 systemd[1]: Started Session 28 of User zuul.
Feb 19 20:09:32 compute-0 sshd-session[239637]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:09:32 compute-0 podman[239639]: 2026-02-19 20:09:32.631807004 +0000 UTC m=+0.085055278 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1770267347, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter)
Feb 19 20:09:33 compute-0 python3[239834]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:09:35 compute-0 sudo[240055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxdbkbkjmibfyoblzbogkzifmjbhdat ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531775.429398-37434-186472913977811/AnsiballZ_command.py'
Feb 19 20:09:35 compute-0 sudo[240055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:09:36 compute-0 python3[240058]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:09:36 compute-0 sudo[240055]: pam_unix(sudo:session): session closed for user root
Feb 19 20:09:36 compute-0 podman[240184]: 2026-02-19 20:09:36.929125147 +0000 UTC m=+0.074724358 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 19 20:09:36 compute-0 sudo[240225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwjsejbiqpopwlmsqtcoosjqvysphdz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531776.5399725-37445-258479685214262/AnsiballZ_command.py'
Feb 19 20:09:36 compute-0 sudo[240225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:09:37 compute-0 python3[240230]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:09:38 compute-0 sudo[240225]: pam_unix(sudo:session): session closed for user root
Feb 19 20:09:39 compute-0 python3[240381]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 19 20:09:40 compute-0 sudo[240547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjydmujsbidlaxgwzjwpejsksimztjdx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531780.4386878-37491-25972677806794/AnsiballZ_setup.py'
Feb 19 20:09:40 compute-0 podman[240507]: 2026-02-19 20:09:40.806962438 +0000 UTC m=+0.065127773 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:09:40 compute-0 sudo[240547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:09:41 compute-0 python3[240553]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 19 20:09:42 compute-0 sudo[240547]: pam_unix(sudo:session): session closed for user root
Feb 19 20:09:43 compute-0 sudo[240789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhprjhmlbuqzlsujiufcjtkuaufiweyx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531782.6630468-37522-202347335692162/AnsiballZ_command.py'
Feb 19 20:09:43 compute-0 sudo[240789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:09:43 compute-0 podman[240751]: 2026-02-19 20:09:43.063810563 +0000 UTC m=+0.112089278 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Feb 19 20:09:43 compute-0 python3[240796]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:09:43 compute-0 sudo[240789]: pam_unix(sudo:session): session closed for user root
Feb 19 20:09:44 compute-0 sudo[240962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uguhamezwscjusjbylpuvbvkcawbamwb ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771531783.7122276-37539-1526766505594/AnsiballZ_command.py'
Feb 19 20:09:44 compute-0 sudo[240962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:09:44 compute-0 python3[240965]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:09:44 compute-0 sudo[240962]: pam_unix(sudo:session): session closed for user root
Feb 19 20:09:45 compute-0 podman[241005]: 2026-02-19 20:09:45.427482917 +0000 UTC m=+0.098253110 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:09:47 compute-0 podman[241026]: 2026-02-19 20:09:47.422564132 +0000 UTC m=+0.101105056 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:09:50 compute-0 podman[241047]: 2026-02-19 20:09:50.426068161 +0000 UTC m=+0.108473839 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 19 20:09:59 compute-0 podman[204724]: time="2026-02-19T20:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:09:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:09:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3842 "" "Go-http-client/1.1"
Feb 19 20:10:01 compute-0 openstack_network_exporter[207898]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:10:01 compute-0 openstack_network_exporter[207898]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:10:02 compute-0 podman[241072]: 2026-02-19 20:10:02.411212058 +0000 UTC m=+0.099665653 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:10:03 compute-0 podman[241095]: 2026-02-19 20:10:03.413270597 +0000 UTC m=+0.100351204 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1770267347, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 19 20:10:07 compute-0 podman[241116]: 2026-02-19 20:10:07.415792411 +0000 UTC m=+0.097365243 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:10:11 compute-0 podman[241134]: 2026-02-19 20:10:11.410225491 +0000 UTC m=+0.090006500 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 19 20:10:13 compute-0 podman[241154]: 2026-02-19 20:10:13.378488286 +0000 UTC m=+0.069219782 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, config_id=kepler)
Feb 19 20:10:16 compute-0 podman[241174]: 2026-02-19 20:10:16.428940104 +0000 UTC m=+0.107971145 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:10:18 compute-0 podman[241198]: 2026-02-19 20:10:18.417864173 +0000 UTC m=+0.104217829 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:10:21 compute-0 podman[241218]: 2026-02-19 20:10:21.475709753 +0000 UTC m=+0.152461947 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.286 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.286 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.310 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.310 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.311 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.311 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.710 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.712 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5698MB free_disk=72.3005599975586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.713 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.713 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.777 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.778 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.802 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.849 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.851 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:10:28 compute-0 nova_compute[188777]: 2026-02-19 20:10:28.851 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:10:29 compute-0 podman[204724]: time="2026-02-19T20:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:10:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:10:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Feb 19 20:10:29 compute-0 nova_compute[188777]: 2026-02-19 20:10:29.829 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:29 compute-0 nova_compute[188777]: 2026-02-19 20:10:29.830 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:29 compute-0 nova_compute[188777]: 2026-02-19 20:10:29.831 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:29 compute-0 nova_compute[188777]: 2026-02-19 20:10:29.831 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:10:30 compute-0 nova_compute[188777]: 2026-02-19 20:10:30.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:30 compute-0 nova_compute[188777]: 2026-02-19 20:10:30.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:30 compute-0 nova_compute[188777]: 2026-02-19 20:10:30.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:10:30 compute-0 nova_compute[188777]: 2026-02-19 20:10:30.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:10:30 compute-0 nova_compute[188777]: 2026-02-19 20:10:30.322 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:10:30 compute-0 nova_compute[188777]: 2026-02-19 20:10:30.323 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:10:30.418 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:10:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:10:30.419 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:10:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:10:30.419 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:10:31 compute-0 openstack_network_exporter[207898]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:10:31 compute-0 openstack_network_exporter[207898]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:10:32 compute-0 nova_compute[188777]: 2026-02-19 20:10:32.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:10:33 compute-0 podman[241245]: 2026-02-19 20:10:33.387562899 +0000 UTC m=+0.070528673 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:10:34 compute-0 podman[241269]: 2026-02-19 20:10:34.417318135 +0000 UTC m=+0.099717374 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347)
Feb 19 20:10:38 compute-0 podman[241290]: 2026-02-19 20:10:38.381509251 +0000 UTC m=+0.074398789 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:10:42 compute-0 podman[241308]: 2026-02-19 20:10:42.394756741 +0000 UTC m=+0.080631387 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127)
Feb 19 20:10:43 compute-0 sshd-session[239653]: Received disconnect from 38.102.83.176 port 56624:11: disconnected by user
Feb 19 20:10:43 compute-0 sshd-session[239653]: Disconnected from user zuul 38.102.83.176 port 56624
Feb 19 20:10:43 compute-0 sshd-session[239637]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:10:43 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Feb 19 20:10:43 compute-0 systemd[1]: session-28.scope: Consumed 9.717s CPU time.
Feb 19 20:10:43 compute-0 systemd-logind[810]: Session 28 logged out. Waiting for processes to exit.
Feb 19 20:10:43 compute-0 systemd-logind[810]: Removed session 28.
Feb 19 20:10:43 compute-0 podman[241328]: 2026-02-19 20:10:43.773859583 +0000 UTC m=+0.103031494 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9)
Feb 19 20:10:47 compute-0 podman[241347]: 2026-02-19 20:10:47.393277776 +0000 UTC m=+0.074391414 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:10:49 compute-0 podman[241371]: 2026-02-19 20:10:49.449413189 +0000 UTC m=+0.128106423 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:10:52 compute-0 podman[241393]: 2026-02-19 20:10:52.477044401 +0000 UTC m=+0.152277688 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
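
Every config_data block in these health events declares its check the same way: a host directory under /var/lib/openstack/healthchecks/<service> bind-mounted read-only at /openstack inside the container, and a test command such as /openstack/healthcheck that podman executes on a timer, which is what produces the health_status=healthy and health_failing_streak=0 fields above. As an illustration only (container name, image, interval, and retry values are assumptions, not taken from this deployment), the equivalent check declared directly on a podman command line:

    import subprocess

    cmd = [
        "podman", "run", "--detach",
        "--name", "demo_agent",                       # hypothetical container
        "--volume", "/var/lib/openstack/healthchecks/demo_agent:/openstack:ro,z",
        "--health-cmd", "/openstack/healthcheck",     # same shape as config_data above
        "--health-interval", "30s",                   # assumed; not stated in this log
        "--health-retries", "3",                      # assumed
        "registry.example.com/demo:latest",           # hypothetical image
    ]
    subprocess.run(cmd, check=True)

Once the failing streak exceeds the retry budget, podman flips health_status to unhealthy, so the health_failing_streak=0 in these events indicates every check is currently passing.
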
Feb 19 20:10:57 compute-0 sshd-session[241417]: Invalid user omega from 103.119.94.10 port 51664
Feb 19 20:10:57 compute-0 sshd-session[241417]: Received disconnect from 103.119.94.10 port 51664:11: Bye Bye [preauth]
Feb 19 20:10:57 compute-0 sshd-session[241417]: Disconnected from invalid user omega 103.119.94.10 port 51664 [preauth]
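
Between the health checks, sshd-session logs a failed probe for a nonexistent account (Invalid user omega from 103.119.94.10), disconnected at preauth. A small sketch for tallying such probes per source address from captured journal text, assuming the standard "Invalid user NAME from ADDR port PORT" wording seen above:

    import re
    import sys
    from collections import Counter

    INVALID_RE = re.compile(r"Invalid user (?P<user>\S+) from (?P<addr>\S+) port \d+")

    def count_probes(lines):
        """Count 'Invalid user' preauth attempts per source address."""
        hits = Counter()
        for line in lines:
            m = INVALID_RE.search(line)
            if m:
                hits[m.group("addr")] += 1
        return hits

    if __name__ == "__main__":
        for addr, n in count_probes(sys.stdin).most_common():
            print(f"{n:6d}  {addr}")

Feeding this section of the journal through the script would report a single attempt from 103.119.94.10.
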
Feb 19 20:10:59 compute-0 podman[204724]: time="2026-02-19T20:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:10:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:10:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3853 "" "Go-http-client/1.1"
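
The podman[204724] lines are the API service logging two requests from a Go HTTP client (the prometheus-podman-exporter polled earlier) against the libpod REST API: a containers/json listing and a containers/stats snapshot. A minimal sketch of issuing the same listing over the service's unix socket with only the standard library; the socket path comes from the podman_exporter config above (CONTAINER_HOST=unix:///run/podman/podman.sock) and the endpoint from the request line in this log:

    import http.client
    import json
    import socket

    SOCKET_PATH = "/run/podman/podman.sock"   # from CONTAINER_HOST above

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that speaks HTTP over a unix domain socket."""
        def __init__(self, path):
            super().__init__("localhost")     # host header only; ignored for AF_UNIX
            self._socket_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection(SOCKET_PATH)
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c.get("Names"), c.get("State"))

Running this requires the podman system service to be listening on that socket, as it evidently is on this host, and typically root privileges.
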
Feb 19 20:11:01 compute-0 openstack_network_exporter[207898]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:11:01 compute-0 openstack_network_exporter[207898]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
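
These two ERRORs recur throughout the log: the exporter invokes the OVS appctl commands dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only apply to the userspace (netdev/DPDK) datapath, and ovs-vswitchd answers "please specify an existing datapath", meaning no such datapath exists here, so there are no PMD statistics to export. A quick check of which datapaths are actually configured before trusting PMD metrics; both command names are standard ovs-appctl targets, but whether either applies on a given host is exactly what the check determines:

    import subprocess

    def appctl(*args):
        """Run ovs-appctl against the local ovs-vswitchd; None on failure."""
        try:
            proc = subprocess.run(["ovs-appctl", *args],
                                  capture_output=True, text=True, check=True)
            return proc.stdout
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None

    # dpif/show lists the configured datapaths; with no netdev datapath
    # present, the pmd-* calls fail exactly as in the ERROR lines above.
    print(appctl("dpif/show"))
    print(appctl("dpif-netdev/pmd-rxq-show") or "no userspace datapath on this host")
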
Feb 19 20:11:04 compute-0 podman[241419]: 2026-02-19 20:11:04.394196856 +0000 UTC m=+0.077334037 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
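
node_exporter is published on host port 9100 (the ports entry above) with most collectors switched off and systemd unit collection restricted to the edpm/ovs/virt/rsyslog services; the web.config.file argument usually means TLS is enabled on the listener, so whether a plain-HTTP scrape works here is an assumption. A sketch of a scrape plus a trivial filter for the one non-default collector left enabled:

    import urllib.request

    # If /etc/node_exporter/node_exporter.yaml enables TLS, switch to https
    # and supply the CA from the mounted telemetry certs instead.
    URL = "http://localhost:9100/metrics"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode()

    # node_systemd_unit_state comes from --collector.systemd, restricted above
    # to (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service units.
    for line in body.splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)
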
Feb 19 20:11:05 compute-0 podman[241444]: 2026-02-19 20:11:05.389343272 +0000 UTC m=+0.077859863 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, vcs-type=git, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container)
Feb 19 20:11:09 compute-0 podman[241464]: 2026-02-19 20:11:09.413726862 +0000 UTC m=+0.099460089 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:11:13 compute-0 podman[241484]: 2026-02-19 20:11:13.420949304 +0000 UTC m=+0.095402321 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:11:14 compute-0 podman[241505]: 2026-02-19 20:11:14.415535415 +0000 UTC m=+0.099973784 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, release=1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.136 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.137 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.137 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
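
These three DEBUG lines describe the agent's polling model: every pollster in the [pollsters] source is registered onto one shared ThreadPoolExecutor, and with a single worker thread the pollsters execute sequentially, which is why the manager warns that the cycle may run long. The following is not ceilometer's code, just a minimal sketch of that register-then-execute pattern under the same one-worker assumption:

    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll():
            # A real pollster would query libvirt here; this one just labels itself.
            return f"polled {name}"
        return poll

    pollsters = [make_pollster(f"metric.{i}") for i in range(30)]
    workers = 1                                        # as in the log above

    if len(pollsters) > workers:
        print("more pollsters than worker threads; expect a longer cycle")

    with ThreadPoolExecutor(max_workers=workers) as executor:
        futures = [executor.submit(p) for p in pollsters]   # "Registering pollster"
        for f in futures:
            f.result()                                      # "Finished processing"
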
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.142 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.143 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.145 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.146 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.147 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.148 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.149 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.149 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.149 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:11:15.149 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:11:18 compute-0 podman[241526]: 2026-02-19 20:11:18.403110144 +0000 UTC m=+0.092599724 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:11:20 compute-0 podman[241551]: 2026-02-19 20:11:20.431170138 +0000 UTC m=+0.116640604 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Feb 19 20:11:23 compute-0 podman[241570]: 2026-02-19 20:11:23.428628428 +0000 UTC m=+0.110915536 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 19 20:11:26 compute-0 sshd-session[241597]: Invalid user user01 from 103.154.77.48 port 37656
Feb 19 20:11:26 compute-0 sshd-session[241597]: Received disconnect from 103.154.77.48 port 37656:11: Bye Bye [preauth]
Feb 19 20:11:26 compute-0 sshd-session[241597]: Disconnected from invalid user user01 103.154.77.48 port 37656 [preauth]
Feb 19 20:11:29 compute-0 podman[204724]: time="2026-02-19T20:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:11:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:11:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.302 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.303 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.303 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.304 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:11:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:11:30.419 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:11:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:11:30.419 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:11:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:11:30.420 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.613 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.614 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5696MB free_disk=72.30054092407227GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.615 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.615 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.829 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.830 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.859 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.944 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.948 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:11:30 compute-0 nova_compute[188777]: 2026-02-19 20:11:30.949 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:11:31 compute-0 sshd-session[241599]: Received disconnect from 125.94.106.195 port 52954:11: Bye Bye [preauth]
Feb 19 20:11:31 compute-0 sshd-session[241599]: Disconnected from authenticating user root 125.94.106.195 port 52954 [preauth]
Feb 19 20:11:31 compute-0 openstack_network_exporter[207898]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:11:31 compute-0 openstack_network_exporter[207898]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:11:31 compute-0 nova_compute[188777]: 2026-02-19 20:11:31.950 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:32 compute-0 nova_compute[188777]: 2026-02-19 20:11:32.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:32 compute-0 nova_compute[188777]: 2026-02-19 20:11:32.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:11:32 compute-0 nova_compute[188777]: 2026-02-19 20:11:32.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:11:32 compute-0 nova_compute[188777]: 2026-02-19 20:11:32.311 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:11:34 compute-0 nova_compute[188777]: 2026-02-19 20:11:34.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:11:35 compute-0 podman[241601]: 2026-02-19 20:11:35.416386107 +0000 UTC m=+0.101401289 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:11:35 compute-0 podman[241625]: 2026-02-19 20:11:35.535581011 +0000 UTC m=+0.081146396 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.7, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=openstack_network_exporter, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, distribution-scope=public)
Feb 19 20:11:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:11:39.154 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:11:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:11:39.156 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:11:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:11:39.158 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:11:40 compute-0 podman[241647]: 2026-02-19 20:11:40.42234439 +0000 UTC m=+0.095976839 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 19 20:11:44 compute-0 podman[241666]: 2026-02-19 20:11:44.436385096 +0000 UTC m=+0.117436539 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 19 20:11:44 compute-0 podman[241684]: 2026-02-19 20:11:44.607837942 +0000 UTC m=+0.131227421 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, vendor=Red Hat, Inc., name=ubi9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, release=1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-type=git)
Feb 19 20:11:48 compute-0 sshd-session[241704]: Invalid user vagrant from 103.213.238.91 port 36140
Feb 19 20:11:48 compute-0 podman[241706]: 2026-02-19 20:11:48.638238389 +0000 UTC m=+0.088788515 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:11:48 compute-0 sshd-session[241704]: Received disconnect from 103.213.238.91 port 36140:11: Bye Bye [preauth]
Feb 19 20:11:48 compute-0 sshd-session[241704]: Disconnected from invalid user vagrant 103.213.238.91 port 36140 [preauth]
Feb 19 20:11:50 compute-0 sshd-session[241729]: Received disconnect from 103.250.11.249 port 50722:11: Bye Bye [preauth]
Feb 19 20:11:50 compute-0 sshd-session[241729]: Disconnected from authenticating user root 103.250.11.249 port 50722 [preauth]
Feb 19 20:11:51 compute-0 podman[241732]: 2026-02-19 20:11:51.434008836 +0000 UTC m=+0.115322124 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 19 20:11:54 compute-0 podman[241751]: 2026-02-19 20:11:54.445769968 +0000 UTC m=+0.122635307 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 19 20:11:59 compute-0 podman[204724]: time="2026-02-19T20:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:11:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:11:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3842 "" "Go-http-client/1.1"
Feb 19 20:12:01 compute-0 openstack_network_exporter[207898]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:12:01 compute-0 openstack_network_exporter[207898]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:12:06 compute-0 podman[241777]: 2026-02-19 20:12:06.427390437 +0000 UTC m=+0.108057341 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Feb 19 20:12:06 compute-0 podman[241778]: 2026-02-19 20:12:06.427741508 +0000 UTC m=+0.108451473 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:12:11 compute-0 podman[241822]: 2026-02-19 20:12:11.407072692 +0000 UTC m=+0.097992226 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 19 20:12:14 compute-0 podman[241842]: 2026-02-19 20:12:14.797553723 +0000 UTC m=+0.105154580 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:12:14 compute-0 podman[241841]: 2026-02-19 20:12:14.828624425 +0000 UTC m=+0.141769885 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64)
Feb 19 20:12:19 compute-0 podman[241880]: 2026-02-19 20:12:19.374370149 +0000 UTC m=+0.063500597 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:12:22 compute-0 podman[241906]: 2026-02-19 20:12:22.479228407 +0000 UTC m=+0.160568284 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 19 20:12:25 compute-0 podman[241925]: 2026-02-19 20:12:25.465615839 +0000 UTC m=+0.139653969 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 19 20:12:29 compute-0 sshd-session[241949]: Invalid user roman from 154.12.80.151 port 34298
Feb 19 20:12:29 compute-0 sshd-session[241949]: Received disconnect from 154.12.80.151 port 34298:11: Bye Bye [preauth]
Feb 19 20:12:29 compute-0 sshd-session[241949]: Disconnected from invalid user roman 154.12.80.151 port 34298 [preauth]
Feb 19 20:12:29 compute-0 podman[204724]: time="2026-02-19T20:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:12:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:12:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3855 "" "Go-http-client/1.1"
Feb 19 20:12:30 compute-0 nova_compute[188777]: 2026-02-19 20:12:30.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:30 compute-0 nova_compute[188777]: 2026-02-19 20:12:30.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:30 compute-0 nova_compute[188777]: 2026-02-19 20:12:30.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:12:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:30.420 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:30.420 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:30.420 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:30.808 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:12:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:30.809 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:12:31 compute-0 nova_compute[188777]: 2026-02-19 20:12:31.261 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:31 compute-0 nova_compute[188777]: 2026-02-19 20:12:31.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:31 compute-0 openstack_network_exporter[207898]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:12:31 compute-0 openstack_network_exporter[207898]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:12:31 compute-0 sshd-session[241951]: Invalid user admin from 160.187.147.124 port 36220
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.268 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.268 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.269 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.283 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.283 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.283 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.284 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.309 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.310 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.311 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.312 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:12:32 compute-0 sshd-session[241951]: Received disconnect from 160.187.147.124 port 36220:11: Bye Bye [preauth]
Feb 19 20:12:32 compute-0 sshd-session[241951]: Disconnected from invalid user admin 160.187.147.124 port 36220 [preauth]
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.669 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.670 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5690MB free_disk=72.30047988891602GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.671 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.671 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.779 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.779 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.818 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.834 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.838 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:12:32 compute-0 nova_compute[188777]: 2026-02-19 20:12:32.839 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:34 compute-0 sshd-session[241953]: Invalid user iksi from 83.235.16.111 port 51882
Feb 19 20:12:35 compute-0 sshd-session[241953]: Received disconnect from 83.235.16.111 port 51882:11: Bye Bye [preauth]
Feb 19 20:12:35 compute-0 sshd-session[241953]: Disconnected from invalid user iksi 83.235.16.111 port 51882 [preauth]
Feb 19 20:12:35 compute-0 nova_compute[188777]: 2026-02-19 20:12:35.820 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:12:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:36.812 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:12:37 compute-0 podman[241955]: 2026-02-19 20:12:37.414403761 +0000 UTC m=+0.095005422 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container)
Feb 19 20:12:37 compute-0 podman[241956]: 2026-02-19 20:12:37.420037427 +0000 UTC m=+0.094690672 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:12:40 compute-0 sshd-session[241999]: Received disconnect from 125.31.2.160 port 56976:11: Bye Bye [preauth]
Feb 19 20:12:40 compute-0 sshd-session[241999]: Disconnected from authenticating user root 125.31.2.160 port 56976 [preauth]
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.785 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.786 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.803 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.903 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.903 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.913 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:12:40 compute-0 nova_compute[188777]: 2026-02-19 20:12:40.914 188781 INFO nova.compute.claims [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.020 188781 DEBUG nova.compute.provider_tree [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.034 188781 DEBUG nova.scheduler.client.report [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.052 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.053 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.096 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.097 188781 DEBUG nova.network.neutron [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.120 188781 INFO nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.151 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.234 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.236 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.237 188781 INFO nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Creating image(s)
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.238 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.239 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.239 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.240 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.241 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.977 188781 WARNING oslo_policy.policy [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 19 20:12:41 compute-0 nova_compute[188777]: 2026-02-19 20:12:41.978 188781 WARNING oslo_policy.policy [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 19 20:12:42 compute-0 nova_compute[188777]: 2026-02-19 20:12:42.377 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:42 compute-0 podman[242001]: 2026-02-19 20:12:42.405570536 +0000 UTC m=+0.089000285 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:12:42 compute-0 nova_compute[188777]: 2026-02-19 20:12:42.443 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.part --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:42 compute-0 nova_compute[188777]: 2026-02-19 20:12:42.444 188781 DEBUG nova.virt.images [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] e1a79c75-2fa3-410d-9c4c-91db3eeca51d was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 19 20:12:42 compute-0 nova_compute[188777]: 2026-02-19 20:12:42.447 188781 DEBUG nova.privsep.utils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 19 20:12:42 compute-0 nova_compute[188777]: 2026-02-19 20:12:42.448 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.part /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.077 188781 DEBUG nova.network.neutron [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Successfully created port: 10027d6c-43cc-4a7c-be42-a49c8c914f25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.112 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.part /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.converted" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.117 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.195 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a.converted --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.197 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.227 188781 INFO oslo.privsep.daemon [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpuo02jhxj/privsep.sock']
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.958 188781 INFO oslo.privsep.daemon [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Spawned new privsep daemon via rootwrap
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.786 242038 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.791 242038 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.793 242038 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 19 20:12:43 compute-0 nova_compute[188777]: 2026-02-19 20:12:43.794 242038 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242038
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.048 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.125 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.127 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.129 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.155 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.221 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.224 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.278 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.280 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.281 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.316 188781 DEBUG nova.network.neutron [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Successfully updated port: 10027d6c-43cc-4a7c-be42-a49c8c914f25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.333 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.334 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.335 188781 DEBUG nova.network.neutron [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.361 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.363 188781 DEBUG nova.virt.disk.api [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Checking if we can resize image /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.364 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.416 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.418 188781 DEBUG nova.virt.disk.api [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Cannot resize image /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.418 188781 DEBUG nova.objects.instance [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'migration_context' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.435 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.435 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.436 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.437 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.438 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.439 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.464 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.465 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.519 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.521 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.538 188781 DEBUG nova.network.neutron [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.543 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.627 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.630 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.631 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.664 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.750 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.752 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.837 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 1073741824" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.838 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.839 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.872 188781 DEBUG nova.compute.manager [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-changed-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.873 188781 DEBUG nova.compute.manager [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Refreshing instance network info cache due to event network-changed-10027d6c-43cc-4a7c-be42-a49c8c914f25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.874 188781 DEBUG oslo_concurrency.lockutils [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.897 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.897 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.898 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Ensure instance console log exists: /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.899 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.900 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:44 compute-0 nova_compute[188777]: 2026-02-19 20:12:44.901 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.346 188781 DEBUG nova.network.neutron [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.370 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.370 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Instance network_info: |[{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.371 188781 DEBUG oslo_concurrency.lockutils [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.371 188781 DEBUG nova.network.neutron [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Refreshing network info cache for port 10027d6c-43cc-4a7c-be42-a49c8c914f25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.377 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Start _get_guest_xml network_info=[{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}], 'ephemerals': [{'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.388 188781 WARNING nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.403 188781 DEBUG nova.virt.libvirt.host [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.404 188781 DEBUG nova.virt.libvirt.host [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.409 188781 DEBUG nova.virt.libvirt.host [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.409 188781 DEBUG nova.virt.libvirt.host [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.410 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.411 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:11:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8030bc1a-9afb-4678-ac07-8b59a1275925',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.411 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.412 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.412 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.413 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.413 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.413 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.414 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.414 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.414 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.415 188781 DEBUG nova.virt.hardware [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.423 188781 DEBUG nova.privsep.utils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.425 188781 DEBUG nova.virt.libvirt.vif [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-gl1rzcqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:12:41Z,user_data=None,user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=5aaac42d-946d-4c6f-9bde-23b8b6613b59,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.426 188781 DEBUG nova.network.os_vif_util [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.427 188781 DEBUG nova.network.os_vif_util [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.432 188781 DEBUG nova.objects.instance [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:12:45 compute-0 podman[242072]: 2026-02-19 20:12:45.446638348 +0000 UTC m=+0.120081736 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:12:45 compute-0 podman[242071]: 2026-02-19 20:12:45.447403073 +0000 UTC m=+0.132793015 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_id=kepler, release-0.7.12=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4)
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.447 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <uuid>5aaac42d-946d-4c6f-9bde-23b8b6613b59</uuid>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <name>instance-00000001</name>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <memory>524288</memory>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:name>test_0</nova:name>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:12:45</nova:creationTime>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:flavor name="m1.small">
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:memory>512</nova:memory>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:ephemeral>1</nova:ephemeral>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:user uuid="9f5597a45dc34ee19bcfe938afde768f">admin</nova:user>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:project uuid="59f01dee51a74ac1a9f82733f591827d">admin</nova:project>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="e1a79c75-2fa3-410d-9c4c-91db3eeca51d"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         <nova:port uuid="10027d6c-43cc-4a7c-be42-a49c8c914f25">
Feb 19 20:12:45 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="192.168.0.193" ipVersion="4"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <system>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <entry name="serial">5aaac42d-946d-4c6f-9bde-23b8b6613b59</entry>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <entry name="uuid">5aaac42d-946d-4c6f-9bde-23b8b6613b59</entry>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </system>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <os>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </os>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <features>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </features>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <target dev="vdb" bus="virtio"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.config"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:e4:9e:14"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <target dev="tap10027d6c-43"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/console.log" append="off"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <video>
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </video>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:12:45 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:12:45 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:12:45 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:12:45 compute-0 nova_compute[188777]: </domain>
Feb 19 20:12:45 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.448 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Preparing to wait for external event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.448 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.448 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.449 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.449 188781 DEBUG nova.virt.libvirt.vif [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-gl1rzcqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:12:41Z,user_data=None,user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=5aaac42d-946d-4c6f-9bde-23b8b6613b59,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.450 188781 DEBUG nova.network.os_vif_util [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.450 188781 DEBUG nova.network.os_vif_util [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.451 188781 DEBUG os_vif [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.496 188781 DEBUG ovsdbapp.backend.ovs_idl [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.497 188781 DEBUG ovsdbapp.backend.ovs_idl [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.497 188781 DEBUG ovsdbapp.backend.ovs_idl [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.498 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.499 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.499 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.500 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.503 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.506 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.524 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.525 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.525 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:12:45 compute-0 nova_compute[188777]: 2026-02-19 20:12:45.527 188781 INFO oslo.privsep.daemon [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpb0olcwit/privsep.sock']
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.197 188781 INFO oslo.privsep.daemon [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Spawned new privsep daemon via rootwrap
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.044 242113 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.050 242113 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.053 242113 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.054 242113 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242113
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.299 188781 DEBUG nova.network.neutron [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated VIF entry in instance network info cache for port 10027d6c-43cc-4a7c-be42-a49c8c914f25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.300 188781 DEBUG nova.network.neutron [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.319 188781 DEBUG oslo_concurrency.lockutils [req-1790ce4f-a6b4-49f0-989d-15b86221a07e req-2b10bfb4-2aa6-4a41-a547-d95bbdceadcb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.500 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.501 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10027d6c-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.501 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10027d6c-43, col_values=(('external_ids', {'iface-id': '10027d6c-43cc-4a7c-be42-a49c8c914f25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:9e:14', 'vm-uuid': '5aaac42d-946d-4c6f-9bde-23b8b6613b59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.505 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:46 compute-0 NetworkManager[57033]: <info>  [1771531966.5064] manager: (tap10027d6c-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.509 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.514 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.515 188781 INFO os_vif [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43')
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.583 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.583 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.583 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.584 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No VIF found with MAC fa:16:3e:e4:9e:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:12:46 compute-0 nova_compute[188777]: 2026-02-19 20:12:46.584 188781 INFO nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Using config drive
Feb 19 20:12:47 compute-0 nova_compute[188777]: 2026-02-19 20:12:47.594 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:47 compute-0 nova_compute[188777]: 2026-02-19 20:12:47.747 188781 INFO nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Creating config drive at /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.config
Feb 19 20:12:47 compute-0 nova_compute[188777]: 2026-02-19 20:12:47.753 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9h5j00ko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:12:47 compute-0 nova_compute[188777]: 2026-02-19 20:12:47.879 188781 DEBUG oslo_concurrency.processutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9h5j00ko" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:12:47 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 19 20:12:47 compute-0 kernel: tap10027d6c-43: entered promiscuous mode
Feb 19 20:12:48 compute-0 nova_compute[188777]: 2026-02-19 20:12:47.999 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:48 compute-0 ovn_controller[98843]: 2026-02-19T20:12:48Z|00027|binding|INFO|Claiming lport 10027d6c-43cc-4a7c-be42-a49c8c914f25 for this chassis.
Feb 19 20:12:48 compute-0 ovn_controller[98843]: 2026-02-19T20:12:48Z|00028|binding|INFO|10027d6c-43cc-4a7c-be42-a49c8c914f25: Claiming fa:16:3e:e4:9e:14 192.168.0.193
Feb 19 20:12:48 compute-0 NetworkManager[57033]: <info>  [1771531968.0022] manager: (tap10027d6c-43): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Feb 19 20:12:48 compute-0 nova_compute[188777]: 2026-02-19 20:12:48.006 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.017 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:9e:14 192.168.0.193'], port_security=['fa:16:3e:e4:9e:14 192.168.0.193'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.193/24', 'neutron:device_id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=10027d6c-43cc-4a7c-be42-a49c8c914f25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.021 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 10027d6c-43cc-4a7c-be42-a49c8c914f25 in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 bound to our chassis
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.026 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.029 108175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp731bwa3g/privsep.sock']
Feb 19 20:12:48 compute-0 ovn_controller[98843]: 2026-02-19T20:12:48Z|00029|binding|INFO|Setting lport 10027d6c-43cc-4a7c-be42-a49c8c914f25 ovn-installed in OVS
Feb 19 20:12:48 compute-0 ovn_controller[98843]: 2026-02-19T20:12:48Z|00030|binding|INFO|Setting lport 10027d6c-43cc-4a7c-be42-a49c8c914f25 up in Southbound
Feb 19 20:12:48 compute-0 nova_compute[188777]: 2026-02-19 20:12:48.050 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:48 compute-0 systemd-udevd[242145]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:12:48 compute-0 NetworkManager[57033]: <info>  [1771531968.0779] device (tap10027d6c-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:12:48 compute-0 NetworkManager[57033]: <info>  [1771531968.0788] device (tap10027d6c-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:12:48 compute-0 systemd-machined[158158]: New machine qemu-1-instance-00000001.
Feb 19 20:12:48 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.699 108175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.703 108175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp731bwa3g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.546 242160 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.553 242160 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.557 242160 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.558 242160 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242160
Feb 19 20:12:48 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:48.711 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[6175472b-6ad8-46a3-84b3-9f77ce257dff]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.133 188781 DEBUG nova.compute.manager [req-f540ce96-6763-4fa0-8658-e2798d160c90 req-f4e5a93f-d3ab-4c43-917b-e786b2329708 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.135 188781 DEBUG oslo_concurrency.lockutils [req-f540ce96-6763-4fa0-8658-e2798d160c90 req-f4e5a93f-d3ab-4c43-917b-e786b2329708 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.136 188781 DEBUG oslo_concurrency.lockutils [req-f540ce96-6763-4fa0-8658-e2798d160c90 req-f4e5a93f-d3ab-4c43-917b-e786b2329708 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.137 188781 DEBUG oslo_concurrency.lockutils [req-f540ce96-6763-4fa0-8658-e2798d160c90 req-f4e5a93f-d3ab-4c43-917b-e786b2329708 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.138 188781 DEBUG nova.compute.manager [req-f540ce96-6763-4fa0-8658-e2798d160c90 req-f4e5a93f-d3ab-4c43-917b-e786b2329708 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Processing event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.217 242160 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.218 242160 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.218 242160 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.440 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.441 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771531969.4394212, 5aaac42d-946d-4c6f-9bde-23b8b6613b59 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.441 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] VM Started (Lifecycle Event)
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.459 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.465 188781 INFO nova.virt.libvirt.driver [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Instance spawned successfully.
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.465 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.487 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.494 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.500 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.500 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.501 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.501 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.502 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.502 188781 DEBUG nova.virt.libvirt.driver [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.521 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.522 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771531969.4395802, 5aaac42d-946d-4c6f-9bde-23b8b6613b59 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.522 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] VM Paused (Lifecycle Event)
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.540 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.546 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771531969.4508014, 5aaac42d-946d-4c6f-9bde-23b8b6613b59 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.546 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] VM Resumed (Lifecycle Event)
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.572 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.578 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.587 188781 INFO nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Took 8.35 seconds to spawn the instance on the hypervisor.
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.588 188781 DEBUG nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.599 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.648 188781 INFO nova.compute.manager [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Took 8.78 seconds to build instance.
Feb 19 20:12:49 compute-0 nova_compute[188777]: 2026-02-19 20:12:49.666 188781 DEBUG oslo_concurrency.lockutils [None req-37b8dd0c-658d-4f95-a793-911c1c0425df 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.811 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5d29d0-d2d4-4195-83fa-9b55b3b97c5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.812 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec82c3b7-51 in ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.815 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec82c3b7-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.815 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[424af4b9-dd2d-40f6-b001-0b4097af7ae1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.819 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[133ef714-4ade-4955-9836-3bfbc2e88d59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.860 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[bf069480-12d1-4882-a48e-0c8d9b03fbc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.891 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[587682d2-9363-4855-8043-454be93996e0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:49 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:49.893 108175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpx6d4t9_m/privsep.sock']
Feb 19 20:12:49 compute-0 podman[242177]: 2026-02-19 20:12:49.95204943 +0000 UTC m=+0.064844599 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:12:50 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:12:50 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.611 108175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.615 108175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpx6d4t9_m/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.418 242224 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.423 242224 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.425 242224 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.426 242224 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242224
Feb 19 20:12:50 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:50.622 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f12719-849a-498e-ba84-618d5ea0333c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.229 188781 DEBUG nova.compute.manager [req-895b34c9-fb35-42cd-8f43-c2ef6def6343 req-0b29930a-1b92-4459-bbdb-4b3b2bde38e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.231 188781 DEBUG oslo_concurrency.lockutils [req-895b34c9-fb35-42cd-8f43-c2ef6def6343 req-0b29930a-1b92-4459-bbdb-4b3b2bde38e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.231 188781 DEBUG oslo_concurrency.lockutils [req-895b34c9-fb35-42cd-8f43-c2ef6def6343 req-0b29930a-1b92-4459-bbdb-4b3b2bde38e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.232 188781 DEBUG oslo_concurrency.lockutils [req-895b34c9-fb35-42cd-8f43-c2ef6def6343 req-0b29930a-1b92-4459-bbdb-4b3b2bde38e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.232 188781 DEBUG nova.compute.manager [req-895b34c9-fb35-42cd-8f43-c2ef6def6343 req-0b29930a-1b92-4459-bbdb-4b3b2bde38e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] No waiting events found dispatching network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.233 188781 WARNING nova.compute.manager [req-895b34c9-fb35-42cd-8f43-c2ef6def6343 req-0b29930a-1b92-4459-bbdb-4b3b2bde38e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received unexpected event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 for instance with vm_state active and task_state None.
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.243 242224 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.244 242224 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.244 242224 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:12:51 compute-0 nova_compute[188777]: 2026-02-19 20:12:51.505 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.882 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[bebca2e7-e552-4d4c-87a3-99f8d786736c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:51 compute-0 NetworkManager[57033]: <info>  [1771531971.9086] manager: (tapec82c3b7-50): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.905 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2cf340-3129-4546-81f3-4563b3653259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:51 compute-0 systemd-udevd[242236]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.951 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[56c17a9d-da1f-4072-b554-918e64354a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.954 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe5fe3f-43e5-4b4f-b1b1-f01dc986fc4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:51 compute-0 NetworkManager[57033]: <info>  [1771531971.9826] device (tapec82c3b7-50): carrier: link connected
Feb 19 20:12:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:51.986 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1280c4-7328-4c73-9ac7-44346bf68aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.007 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[21fb5309-5cc5-45f9-955a-d69c859fe0fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 38775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242254, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.040 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[2afd613d-93b3-4563-a450-82ea2b3afd73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:e7d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348344, 'tstamp': 348344}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242255, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.080 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[f14a2843-7857-4945-b697-5f719ad8e61b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 38775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242256, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.117 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d1eae6-5d8a-4d15-b27f-7a7d3791add5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.193 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[103b5c1a-271e-49ac-9629-58a0b51064e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.195 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.196 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.197 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:12:52 compute-0 kernel: tapec82c3b7-50: entered promiscuous mode
Feb 19 20:12:52 compute-0 NetworkManager[57033]: <info>  [1771531972.2040] manager: (tapec82c3b7-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.205 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
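Editor's note: the three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) remove the tap port from br-ex if present, add it to br-int, and set its iface-id so ovn-controller can bind the logical port. The agent issues each as its own one-command transaction ("txn n=1"); the sketch below batches them into a single transaction purely for brevity, and the socket path is an assumption:

    # Illustrative sketch of the equivalent ovsdbapp calls against the
    # Open_vSwitch schema; endpoint and names are illustrative.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapec82c3b7-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapec82c3b7-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapec82c3b7-50',
            ('external_ids',
             {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'})))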
Feb 19 20:12:52 compute-0 ovn_controller[98843]: 2026-02-19T20:12:52Z|00031|binding|INFO|Releasing lport a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce from this chassis (sb_readonly=0)
Feb 19 20:12:52 compute-0 nova_compute[188777]: 2026-02-19 20:12:52.208 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:52 compute-0 nova_compute[188777]: 2026-02-19 20:12:52.224 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.227 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec82c3b7-5389-43ab-a939-ce6cd12f9681.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec82c3b7-5389-43ab-a939-ce6cd12f9681.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.229 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[60049b55-4e66-4d99-abaa-3f085ac1eabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.231 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/ec82c3b7-5389-43ab-a939-ce6cd12f9681.pid.haproxy
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
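Editor's note: the rendered configuration above binds the link-local metadata address 169.254.169.254:80, forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, and tags each request with the network UUID via X-OVN-Network-ID. The agent launches it directly on the next line; as a side note only (the agent does not do this here), a rendered file like this can be syntax-checked with haproxy's check-only mode:

    # Illustrative only: validate the rendered config with haproxy -c
    # (check mode) before launching it.
    import subprocess

    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'ec82c3b7-5389-43ab-a939-ce6cd12f9681.conf')
    check = subprocess.run(['haproxy', '-c', '-f', cfg],
                           capture_output=True, text=True)
    if check.returncode != 0:
        raise RuntimeError(f'haproxy config check failed: {check.stderr}')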
Feb 19 20:12:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:12:52.233 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'env', 'PROCESS_TAG=haproxy-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec82c3b7-5389-43ab-a939-ce6cd12f9681.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 19 20:12:52 compute-0 nova_compute[188777]: 2026-02-19 20:12:52.596 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:52 compute-0 podman[242287]: 2026-02-19 20:12:52.633925509 +0000 UTC m=+0.058184411 container create 830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:12:52 compute-0 systemd[1]: Started libpod-conmon-830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd.scope.
Feb 19 20:12:52 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f506d57a04a6731c7b4448db0995098917d3eb5a51f85b9ad86f8c5480cf39e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:12:52 compute-0 podman[242287]: 2026-02-19 20:12:52.606744548 +0000 UTC m=+0.031003470 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:12:52 compute-0 podman[242287]: 2026-02-19 20:12:52.723145139 +0000 UTC m=+0.147404061 container init 830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:12:52 compute-0 podman[242287]: 2026-02-19 20:12:52.730802428 +0000 UTC m=+0.155061330 container start 830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:12:52 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [NOTICE]   (242315) : New worker (242324) forked
Feb 19 20:12:52 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [NOTICE]   (242315) : Loading success.
Feb 19 20:12:52 compute-0 podman[242299]: 2026-02-19 20:12:52.78872811 +0000 UTC m=+0.113591744 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Feb 19 20:12:56 compute-0 nova_compute[188777]: 2026-02-19 20:12:56.508 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:56 compute-0 podman[242333]: 2026-02-19 20:12:56.5169403 +0000 UTC m=+0.182254961 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 19 20:12:57 compute-0 nova_compute[188777]: 2026-02-19 20:12:57.601 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:12:59 compute-0 podman[204724]: time="2026-02-19T20:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:12:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:12:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Feb 19 20:13:01 compute-0 openstack_network_exporter[207898]: ERROR   20:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:13:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:13:01 compute-0 openstack_network_exporter[207898]: ERROR   20:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:13:01 compute-0 openstack_network_exporter[207898]: 
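Editor's note: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show are ovs-appctl commands that only apply to the userspace (netdev/DPDK) datapath; on a host running only the kernel datapath they fail with "please specify an existing datapath", which is what the exporter hits on every scrape here. An illustrative guard (not the exporter's actual code) would probe the datapath list first:

    # Illustrative sketch: skip PMD queries when no userspace (netdev)
    # datapath exists, the condition behind the errors above.
    import subprocess

    def has_netdev_datapath() -> bool:
        # dpctl/dump-dps lists datapaths, e.g. "system@ovs-system"
        out = subprocess.run(['ovs-appctl', 'dpctl/dump-dps'],
                             capture_output=True, text=True).stdout
        return any(line.startswith('netdev@') for line in out.splitlines())

    if has_netdev_datapath():
        subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'])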
Feb 19 20:13:01 compute-0 nova_compute[188777]: 2026-02-19 20:13:01.511 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:02 compute-0 nova_compute[188777]: 2026-02-19 20:13:02.607 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:06 compute-0 sshd-session[242361]: Invalid user bb from 158.174.210.161 port 11332
Feb 19 20:13:06 compute-0 nova_compute[188777]: 2026-02-19 20:13:06.516 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:06 compute-0 sshd-session[242361]: Received disconnect from 158.174.210.161 port 11332:11: Bye Bye [preauth]
Feb 19 20:13:06 compute-0 sshd-session[242361]: Disconnected from invalid user bb 158.174.210.161 port 11332 [preauth]
Feb 19 20:13:07 compute-0 ovn_controller[98843]: 2026-02-19T20:13:07Z|00032|binding|INFO|Releasing lport a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce from this chassis (sb_readonly=0)
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0063] manager: (patch-br-int-to-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.005 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0087] device (patch-br-int-to-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <warn>  [1771531987.0090] device (patch-br-int-to-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0130] manager: (patch-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0150] device (patch-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <warn>  [1771531987.0150] device (patch-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 19 20:13:07 compute-0 ovn_controller[98843]: 2026-02-19T20:13:07Z|00033|binding|INFO|Releasing lport a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce from this chassis (sb_readonly=0)
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.017 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0192] manager: (patch-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0218] manager: (patch-br-int-to-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0238] device (patch-br-int-to-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.024 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:07 compute-0 NetworkManager[57033]: <info>  [1771531987.0258] device (patch-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.328 188781 DEBUG nova.compute.manager [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-changed-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.328 188781 DEBUG nova.compute.manager [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Refreshing instance network info cache due to event network-changed-10027d6c-43cc-4a7c-be42-a49c8c914f25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.329 188781 DEBUG oslo_concurrency.lockutils [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.329 188781 DEBUG oslo_concurrency.lockutils [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.329 188781 DEBUG nova.network.neutron [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Refreshing network info cache for port 10027d6c-43cc-4a7c-be42-a49c8c914f25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:13:07 compute-0 nova_compute[188777]: 2026-02-19 20:13:07.608 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:08 compute-0 podman[242364]: 2026-02-19 20:13:08.412577418 +0000 UTC m=+0.101476490 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.7, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 19 20:13:08 compute-0 podman[242365]: 2026-02-19 20:13:08.422707169 +0000 UTC m=+0.101852032 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:13:08 compute-0 nova_compute[188777]: 2026-02-19 20:13:08.429 188781 DEBUG nova.network.neutron [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated VIF entry in instance network info cache for port 10027d6c-43cc-4a7c-be42-a49c8c914f25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:13:08 compute-0 nova_compute[188777]: 2026-02-19 20:13:08.432 188781 DEBUG nova.network.neutron [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:13:08 compute-0 nova_compute[188777]: 2026-02-19 20:13:08.458 188781 DEBUG oslo_concurrency.lockutils [req-e7ee7a48-47ba-43b7-bd3f-df139a170014 req-2d3a0954-bf06-49e2-a215-9029317f95b1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
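Editor's note: the sequence above is nova-compute's standard handling of a network-changed event: acquire the per-instance refresh_cache lock, refresh the network info cache from neutron, update instance_info_cache, then release the lock. A minimal sketch of the locking pattern via oslo.concurrency, with the refresh step stubbed out:

    # Minimal sketch of the lock pattern in the lines above. The lock
    # name mirrors nova's "refresh_cache-<instance uuid>" convention;
    # the refresh body is a placeholder.
    from oslo_concurrency import lockutils

    instance_uuid = '5aaac42d-946d-4c6f-9bde-23b8b6613b59'

    with lockutils.lock(f'refresh_cache-{instance_uuid}'):
        # ... refresh the instance network info cache while holding
        # the lock, then update instance_info_cache ...
        pass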
Feb 19 20:13:11 compute-0 nova_compute[188777]: 2026-02-19 20:13:11.525 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:12 compute-0 nova_compute[188777]: 2026-02-19 20:13:12.611 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:13 compute-0 podman[242407]: 2026-02-19 20:13:13.404459389 +0000 UTC m=+0.085942018 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.137 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.139 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.139 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.140 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8f0b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
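Editor's note: the burst of "Registering pollster" lines above corresponds to the single-worker executor warned about at 20:13:15.137: every pollster in the [pollsters] source is queued onto one ThreadPoolExecutor thread and executed in turn. Schematically (meter names taken from the surrounding lines, the poll body is a stub):

    # Sketch of the dispatch pattern: pollsters registered onto a small
    # ThreadPoolExecutor and executed sequentially on one worker thread.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        # ... gather one sample for this meter ...
        return f'{name}: polled'

    pollsters = ['network.outgoing.packets.error',
                 'network.incoming.bytes.rate',
                 'network.outgoing.packets',
                 'network.incoming.bytes.delta']

    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)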
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.151 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:13:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:15.452 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5aaac42d-946d-4c6f-9bde-23b8b6613b59 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.061 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Thu, 19 Feb 2026 20:13:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-61363465-38ec-440a-a7f0-d2d4928c8c0d x-openstack-request-id: req-61363465-38ec-440a-a7f0-d2d4928c8c0d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.061 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5aaac42d-946d-4c6f-9bde-23b8b6613b59", "name": "test_0", "status": "ACTIVE", "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "user_id": "9f5597a45dc34ee19bcfe938afde768f", "metadata": {}, "hostId": "fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7", "image": {"id": "e1a79c75-2fa3-410d-9c4c-91db3eeca51d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e1a79c75-2fa3-410d-9c4c-91db3eeca51d"}]}, "flavor": {"id": "8030bc1a-9afb-4678-ac07-8b59a1275925", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8030bc1a-9afb-4678-ac07-8b59a1275925"}]}, "created": "2026-02-19T20:12:38Z", "updated": "2026-02-19T20:12:49Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.193", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e4:9e:14"}, {"version": 4, "addr": "192.168.122.219", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e4:9e:14"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5aaac42d-946d-4c6f-9bde-23b8b6613b59"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5aaac42d-946d-4c6f-9bde-23b8b6613b59"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:12:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.061 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5aaac42d-946d-4c6f-9bde-23b8b6613b59 used request id req-61363465-38ec-440a-a7f0-d2d4928c8c0d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.064 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.064 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.064 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.064 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.065 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:13:16.064688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.076 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5aaac42d-946d-4c6f-9bde-23b8b6613b59 / tap10027d6c-43 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.076 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.078 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.078 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.078 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.079 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.079 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.079 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.079 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:13:16.079232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.079 15 ERROR ceilometer.polling.manager [-] Permanently preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.081 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.081 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.081 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.081 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.081 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.081 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.082 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.083 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.083 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.083 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.083 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.083 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.084 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.084 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.084 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.084 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.085 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.085 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.085 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.085 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:13:16.081702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:13:16.082863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:13:16.084009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:13:16.085370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
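The heartbeat lines come in pairs across two workers: worker 15 logs "Pollster heartbeat update: <name>" from the polling path, and worker 12 later records "Updated heartbeat for <name> (<timestamp>)" in _update_status. A queue-and-thread sketch of that handoff, with all names assumed rather than taken from ceilometer:

    import datetime
    import queue
    import threading
    import time

    heartbeats = queue.Queue()
    status = {}                       # pollster name -> last heartbeat timestamp

    def heartbeat(name):              # polling worker side (worker 15 above)
        heartbeats.put(name)

    def _update_status():             # status worker side (worker 12 above)
        while True:
            name = heartbeats.get()
            status[name] = datetime.datetime.now().isoformat()
            print("Updated heartbeat for %s (%s)" % (name, status[name]))

    threading.Thread(target=_update_status, daemon=True).start()
    heartbeat("power.state")
    time.sleep(0.1)                   # give the status thread a moment to log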
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.128 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.129 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.130 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.130 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.130 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.130 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.130 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.130 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:13:16.129637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:13:16.130645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.174 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.175 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.175 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.176 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
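disk.device.* meters produce one sample per block device, which is why disk.device.capacity logged three volumes (1073741824, 1073741824, 485376) for a single instance. A sketch of that fan-out; the device names vda/vdb/vdc and the Sample fields are assumptions, with the resource id keyed on instance plus device (format assumed):

    from dataclasses import dataclass

    @dataclass
    class Sample:
        name: str
        resource_id: str     # instance uuid plus device for per-device meters
        volume: int

    def stats_to_samples(instance_id, meter, per_device):
        for device, volume in per_device.items():
            yield Sample(meter, "%s-%s" % (instance_id, device), volume)

    devices = {"vda": 1073741824, "vdb": 1073741824, "vdc": 485376}  # assumed names
    for s in stats_to_samples("5aaac42d-946d-4c6f-9bde-23b8b6613b59",
                              "disk.device.capacity", devices):
        print(s)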
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.176 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.177 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.177 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:13:16.177520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.241 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.242 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.242 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.243 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.243 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.243 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.243 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.244 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.244 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.244 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 25830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:13:16.244427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.245 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.245 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.245 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.246 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.246 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.246 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.246 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.246 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.247 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.247 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.247 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.248 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:13:16.246408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.248 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.249 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 496222391 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.249 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.249 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 2930972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.250 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:13:16.248566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.251 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.251 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.251 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.251 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.251 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.252 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.252 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:13:16.251851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.253 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.253 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.253 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.254 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:13:16.253875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.254 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.255 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.255 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.255 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.255 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.255 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.256 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.256 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:13:16.255817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.257 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.257 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.258 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.258 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.258 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.258 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.258 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.258 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:13:16.258709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.259 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.259 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.259 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.260 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.260 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:13:16.260475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.260 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.260 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.261 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.261 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.262 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.262 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.262 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.262 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.263 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.263 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:13:16.263283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.263 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.264 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.264 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.265 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.265 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.265 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.265 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.265 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:13:16.266059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.266 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.266 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.266 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.267 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.267 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.268 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.268 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.268 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.268 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.269 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.269 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:13:16.268933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.269 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.270 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.270 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.271 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.271 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.271 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.271 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.272 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.272 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:13:16.272076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.273 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.273 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.273 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.273 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.273 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.274 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.274 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.274 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.275 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.276 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.276 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.277 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.277 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.278 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.278 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.279 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:13:16.274071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:13:16.277337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.279 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:13:16.279524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.280 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.281 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.281 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.281 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.281 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59: ceilometer.compute.pollsters.NoVolumeException
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.281 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.282 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.282 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.282 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:13:16.281332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.283 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:13:16.282720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.283 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.284 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.285 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.286 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.286 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.286 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.286 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:13:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:13:16.286 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
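The cycle above is the per-meter pattern the compute agent repeats every interval: discover local instances, poll each meter, convert the hypervisor statistic to a sample, and log "Unavailable" plus a NoVolumeException when the hypervisor reports nothing (as happened for memory.usage at 20:13:16.281). A minimal sketch of that conversion step, using illustrative names rather than ceilometer's actual internals:

    # Illustrative only: mirrors the volume-or-unavailable logic the agent
    # logs above, not ceilometer's real _stats_to_sample implementation.
    class NoVolumeException(Exception):
        """The hypervisor reported no value for this meter."""

    def stats_to_sample(instance_id, meter, stats):
        volume = stats.get(meter)
        print(f"{instance_id}/{meter} volume: "
              f"{'Unavailable' if volume is None else volume}")
        if volume is None:
            raise NoVolumeException(meter)
        return {"instance": instance_id, "meter": meter, "volume": volume}

    # Values taken from the entries above:
    stats_to_sample("5aaac42d-946d-4c6f-9bde-23b8b6613b59",
                    "network.incoming.bytes", {"network.incoming.bytes": 90})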
Feb 19 20:13:16 compute-0 podman[242427]: 2026-02-19 20:13:16.368095602 +0000 UTC m=+0.060535671 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public)
Feb 19 20:13:16 compute-0 podman[242428]: 2026-02-19 20:13:16.396182923 +0000 UTC m=+0.084692298 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:13:16 compute-0 nova_compute[188777]: 2026-02-19 20:13:16.528 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:17 compute-0 nova_compute[188777]: 2026-02-19 20:13:17.613 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:20 compute-0 podman[242467]: 2026-02-19 20:13:20.415669743 +0000 UTC m=+0.091160192 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:13:21 compute-0 ovn_controller[98843]: 2026-02-19T20:13:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:9e:14 192.168.0.193
Feb 19 20:13:21 compute-0 ovn_controller[98843]: 2026-02-19T20:13:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:9e:14 192.168.0.193
Feb 19 20:13:21 compute-0 nova_compute[188777]: 2026-02-19 20:13:21.531 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:22 compute-0 nova_compute[188777]: 2026-02-19 20:13:22.615 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:23 compute-0 podman[242498]: 2026-02-19 20:13:23.202292363 +0000 UTC m=+0.133855567 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true)
Feb 19 20:13:26 compute-0 nova_compute[188777]: 2026-02-19 20:13:26.537 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:27 compute-0 podman[242519]: 2026-02-19 20:13:27.469390027 +0000 UTC m=+0.150291239 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:13:27 compute-0 nova_compute[188777]: 2026-02-19 20:13:27.619 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:29 compute-0 podman[204724]: time="2026-02-19T20:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:13:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:13:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Feb 19 20:13:30 compute-0 nova_compute[188777]: 2026-02-19 20:13:30.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:30.424 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:30.426 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:30.427 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:31 compute-0 openstack_network_exporter[207898]: ERROR   20:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:13:31 compute-0 openstack_network_exporter[207898]: ERROR   20:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
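These two errors recur because the exporter polls dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, appctl commands that only apply to a userspace (netdev) datapath; this host runs the kernel datapath, so OVS answers "please specify an existing datapath". The failing call can be reproduced directly (a sketch assuming ovs-appctl is on PATH and the OVS run directory is accessible):

    import subprocess

    # Same appctl target the exporter calls; expect the error seen above
    # on a host with no userspace (netdev) datapath.
    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                       capture_output=True, text=True)
    print(r.returncode, (r.stderr or r.stdout).strip())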
Feb 19 20:13:31 compute-0 nova_compute[188777]: 2026-02-19 20:13:31.542 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:32 compute-0 nova_compute[188777]: 2026-02-19 20:13:32.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:32 compute-0 nova_compute[188777]: 2026-02-19 20:13:32.262 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:32 compute-0 nova_compute[188777]: 2026-02-19 20:13:32.262 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:32 compute-0 nova_compute[188777]: 2026-02-19 20:13:32.263 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:13:32 compute-0 nova_compute[188777]: 2026-02-19 20:13:32.621 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.297 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.298 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.298 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.298 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.383 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.453 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.456 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.544 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.546 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.594 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.596 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:33 compute-0 nova_compute[188777]: 2026-02-19 20:13:33.679 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
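The four commands above are the resource audit measuring the instance's root and ephemeral disks: qemu-img info run under oslo.concurrency's prlimit wrapper (1 GiB address space, 30 s CPU) with --force-share so the running guest's image can be inspected safely. A minimal sketch of the same invocation, assuming qemu-img and oslo.concurrency are installed and the disk path from the log exists:

    import json
    import subprocess

    def qemu_img_info(path):
        # Identical command line to the audit entries above.
        cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
               "--as=1073741824", "--cpu=30", "--",
               "env", "LC_ALL=C", "LANG=C",
               "qemu-img", "info", path, "--force-share", "--output=json"]
        out = subprocess.run(cmd, capture_output=True, check=True)
        return json.loads(out.stdout)  # includes virtual-size and actual-size

    info = qemu_img_info(
        "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk")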
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.251 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.253 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5236MB free_disk=72.24847412109375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.253 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.254 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.326 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.327 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.327 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.365 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.390 188781 ERROR nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [req-19ac4329-bbcf-4f82-b25e-09360418a648] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID c266959e-952e-41ad-bc2e-56513f39ec2d.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-19ac4329-bbcf-4f82-b25e-09360418a648"}]}
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.407 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.421 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.421 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.433 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.457 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.496 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.539 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updated inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.539 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating resource provider c266959e-952e-41ad-bc2e-56513f39ec2d generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.540 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.564 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:13:34 compute-0 nova_compute[188777]: 2026-02-19 20:13:34.564 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
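The ERROR at 20:13:34.390 is benign optimistic concurrency, not a failed audit: each placement resource provider carries a generation number, the PUT carried a stale one, placement answered 409 placement.concurrent_update, and the client refreshed the provider (generation 3) and retried, landing the inventory and bumping the generation to 4. A minimal sketch of that retry loop against the placement API, using the real endpoints but a hypothetical requests-style session object:

    def set_inventory(session, rp_uuid, inventories, retries=3):
        for _ in range(retries):
            # Re-read the provider to pick up the current generation.
            rp = session.get(f"/resource_providers/{rp_uuid}").json()
            body = {"resource_provider_generation": rp["generation"],
                    "inventories": inventories}
            resp = session.put(
                f"/resource_providers/{rp_uuid}/inventories", json=body)
            if resp.status_code != 409:  # 409 = lost the race; refresh, retry
                return resp.json()
        raise RuntimeError("placement generation conflict persisted")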
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.564 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.565 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.565 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.949 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.950 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.950 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:13:35 compute-0 nova_compute[188777]: 2026-02-19 20:13:35.950 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:13:36 compute-0 nova_compute[188777]: 2026-02-19 20:13:36.547 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:36 compute-0 nova_compute[188777]: 2026-02-19 20:13:36.959 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:13:36 compute-0 nova_compute[188777]: 2026-02-19 20:13:36.976 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:13:36 compute-0 nova_compute[188777]: 2026-02-19 20:13:36.976 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
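The _heal_instance_info_cache task refreshes one instance's network info per run, serialized on a per-instance "refresh_cache-<uuid>" lock exactly as the lock messages above show. A minimal sketch of that pattern, with fetch_nw_info and save_cache standing in as hypothetical helpers for the neutron query and cache write:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid, fetch_nw_info, save_cache):
        # Same lock naming nova logs: "refresh_cache-<instance uuid>".
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            nw_info = fetch_nw_info(instance_uuid)   # force-refresh from neutron
            save_cache(instance_uuid, nw_info)       # update instance_info_cache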
Feb 19 20:13:36 compute-0 nova_compute[188777]: 2026-02-19 20:13:36.977 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:13:37 compute-0 ovn_controller[98843]: 2026-02-19T20:13:37Z|00034|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Feb 19 20:13:37 compute-0 nova_compute[188777]: 2026-02-19 20:13:37.623 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:39 compute-0 podman[242561]: 2026-02-19 20:13:39.393878369 +0000 UTC m=+0.072362456 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:13:39 compute-0 podman[242560]: 2026-02-19 20:13:39.400106127 +0000 UTC m=+0.080222355 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, release=1770267347, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 19 20:13:41 compute-0 nova_compute[188777]: 2026-02-19 20:13:41.552 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:42 compute-0 nova_compute[188777]: 2026-02-19 20:13:42.626 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:43 compute-0 sshd-session[242558]: Received disconnect from 103.103.245.7 port 59102:11: Bye Bye [preauth]
Feb 19 20:13:43 compute-0 sshd-session[242558]: Disconnected from authenticating user root 103.103.245.7 port 59102 [preauth]
Feb 19 20:13:44 compute-0 podman[242605]: 2026-02-19 20:13:44.374211324 +0000 UTC m=+0.061193922 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 19 20:13:46 compute-0 nova_compute[188777]: 2026-02-19 20:13:46.556 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:47 compute-0 podman[242626]: 2026-02-19 20:13:47.432980275 +0000 UTC m=+0.109153333 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.4, config_id=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 19 20:13:47 compute-0 podman[242627]: 2026-02-19 20:13:47.445664107 +0000 UTC m=+0.116715033 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 19 20:13:47 compute-0 nova_compute[188777]: 2026-02-19 20:13:47.628 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:51 compute-0 podman[242666]: 2026-02-19 20:13:51.423858497 +0000 UTC m=+0.106146977 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:13:51 compute-0 nova_compute[188777]: 2026-02-19 20:13:51.559 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.608 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.610 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.626 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.630 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.714 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.715 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.726 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.727 188781 INFO nova.compute.claims [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.855 188781 DEBUG nova.compute.provider_tree [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.873 188781 DEBUG nova.scheduler.client.report [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.903 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.904 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.961 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.962 188781 DEBUG nova.network.neutron [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:13:52 compute-0 nova_compute[188777]: 2026-02-19 20:13:52.984 188781 INFO nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.017 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.101 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.102 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.103 188781 INFO nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Creating image(s)
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.103 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.104 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.104 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.116 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.160 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.162 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.162 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.178 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.236 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.237 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.277 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.278 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.279 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.334 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.335 188781 DEBUG nova.virt.disk.api [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Checking if we can resize image /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.335 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 podman[242699]: 2026-02-19 20:13:53.376790636 +0000 UTC m=+0.065090466 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.384 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.385 188781 DEBUG nova.virt.disk.api [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Cannot resize image /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.386 188781 DEBUG nova.objects.instance [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'migration_context' on Instance uuid 0975826c-6016-48c8-a7dd-1b10a32f91ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.402 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.403 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.404 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.432 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.482 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.483 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.483 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.494 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.540 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.541 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.576 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.577 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.578 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.640 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.642 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.642 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Ensure instance console log exists: /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.643 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.644 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:53 compute-0 nova_compute[188777]: 2026-02-19 20:13:53.645 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.563 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.629 188781 DEBUG nova.network.neutron [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Successfully updated port: db2ce91f-7740-44a2-bab1-8455e2dfddde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.648 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.648 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.648 188781 DEBUG nova.network.neutron [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.724 188781 DEBUG nova.compute.manager [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-changed-db2ce91f-7740-44a2-bab1-8455e2dfddde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.725 188781 DEBUG nova.compute.manager [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Refreshing instance network info cache due to event network-changed-db2ce91f-7740-44a2-bab1-8455e2dfddde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.725 188781 DEBUG oslo_concurrency.lockutils [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:13:56 compute-0 nova_compute[188777]: 2026-02-19 20:13:56.795 188781 DEBUG nova.network.neutron [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.632 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.692 188781 DEBUG nova.network.neutron [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.722 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.723 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance network_info: |[{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.723 188781 DEBUG oslo_concurrency.lockutils [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.724 188781 DEBUG nova.network.neutron [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Refreshing network info cache for port db2ce91f-7740-44a2-bab1-8455e2dfddde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.727 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Start _get_guest_xml network_info=[{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}], 'ephemerals': [{'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.733 188781 WARNING nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.739 188781 DEBUG nova.virt.libvirt.host [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.739 188781 DEBUG nova.virt.libvirt.host [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.743 188781 DEBUG nova.virt.libvirt.host [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.743 188781 DEBUG nova.virt.libvirt.host [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.744 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.744 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:11:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8030bc1a-9afb-4678-ac07-8b59a1275925',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.744 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.745 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.745 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.745 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.746 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.746 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.746 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.747 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.747 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.747 188781 DEBUG nova.virt.hardware [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
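
With no flavor or image constraints, the limits fall back to 65536 per dimension and the search is pure factorization, which is why a 1-vCPU guest yields exactly one topology. A rough sketch of that enumeration (nova's real logic lives in nova/virt/hardware.py, as the paths above show):

    # Enumerate sockets/cores/threads factorizations of a vCPU count under
    # per-dimension limits; a sketch of the search logged above, not nova's code.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    topos.append((sockets, cores, threads))
        return topos

    assert possible_topologies(1) == [(1, 1, 1)]  # matches "Got 1 possible topologies"
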
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.751 188781 DEBUG nova.virt.libvirt.vif [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:13:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a',id=2,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-cv4t54vg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:13:53Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTAxNzYyNTMzNzg5OTg5NzQ4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Feb 19 20:13:57 compute-0 nova_compute[188777]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTAxNzYyNTMzNzg5OTg5NzQ4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=0975826c-6016-48c8-a7dd-1b10a32f91ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.752 188781 DEBUG nova.network.os_vif_util [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.752 188781 DEBUG nova.network.os_vif_util [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.753 188781 DEBUG nova.objects.instance [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'pci_devices' on Instance uuid 0975826c-6016-48c8-a7dd-1b10a32f91ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.766 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <uuid>0975826c-6016-48c8-a7dd-1b10a32f91ba</uuid>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <name>instance-00000002</name>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <memory>524288</memory>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:name>vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a</nova:name>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:13:57</nova:creationTime>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:flavor name="m1.small">
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:memory>512</nova:memory>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:ephemeral>1</nova:ephemeral>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:user uuid="9f5597a45dc34ee19bcfe938afde768f">admin</nova:user>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:project uuid="59f01dee51a74ac1a9f82733f591827d">admin</nova:project>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="e1a79c75-2fa3-410d-9c4c-91db3eeca51d"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         <nova:port uuid="db2ce91f-7740-44a2-bab1-8455e2dfddde">
Feb 19 20:13:57 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="192.168.0.213" ipVersion="4"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <system>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <entry name="serial">0975826c-6016-48c8-a7dd-1b10a32f91ba</entry>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <entry name="uuid">0975826c-6016-48c8-a7dd-1b10a32f91ba</entry>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </system>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <os>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </os>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <features>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </features>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <target dev="vdb" bus="virtio"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.config"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:4d:93:1a"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <target dev="tapdb2ce91f-77"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/console.log" append="off"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <video>
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </video>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:13:57 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:13:57 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:13:57 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:13:57 compute-0 nova_compute[188777]: </domain>
Feb 19 20:13:57 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
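
The document above is exactly what nova hands to libvirt. Outside of nova, the same XML can be fed to libvirt directly with the libvirt-python bindings; a sketch, assuming the XML has been saved to a local file and a privileged qemu:///system connection is available:

    # Define and boot a domain from the XML above; these are the same libvirt
    # calls nova's driver ultimately makes.
    import libvirt

    with open('instance-00000002.xml') as f:   # hypothetical file holding the XML
        xml = f.read()

    conn = libvirt.open('qemu:///system')      # requires privileges on the host
    dom = conn.defineXML(xml)                  # persist the definition
    dom.create()                               # start the guest
    print(dom.name(), dom.UUIDString())
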
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.767 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Preparing to wait for external event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.767 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.768 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.768 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
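
The acquire/release pair above is oslo.concurrency serializing event registration on a per-instance "<uuid>-events" key, so waiters and incoming callbacks cannot race. The same primitive used stand-alone, with the lock name from the log:

    # Stand-alone use of the lock seen above (an in-process lock by default).
    from oslo_concurrency import lockutils

    with lockutils.lock('0975826c-6016-48c8-a7dd-1b10a32f91ba-events'):
        # nova registers its network-vif-plugged waiter inside this section
        pass
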
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.769 188781 DEBUG nova.virt.libvirt.vif [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:13:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a',id=2,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-cv4t54vg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:13:53Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTAxNzYyNTMzNzg5OTg5NzQ4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Feb 19 20:13:57 compute-0 nova_compute[188777]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTAxNzYyNTMzNzg5OTg5NzQ4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=0975826c-6016-48c8-a7dd-1b10a32f91ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.769 188781 DEBUG nova.network.os_vif_util [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.770 188781 DEBUG nova.network.os_vif_util [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.770 188781 DEBUG os_vif [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.771 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.771 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.771 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.775 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.775 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb2ce91f-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.776 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb2ce91f-77, col_values=(('external_ids', {'iface-id': 'db2ce91f-7740-44a2-bab1-8455e2dfddde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:93:1a', 'vm-uuid': '0975826c-6016-48c8-a7dd-1b10a32f91ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
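
The two commands in this transaction are what attach the tap device to br-int and stamp the Interface row with the Neutron port ID that ovn-controller matches on (the claim appears a few lines below). Roughly the same transaction can be issued through ovsdbapp directly; a sketch, where the socket path is the common default rather than something taken from this log:

    # Roughly the AddPortCommand + DbSetCommand transaction logged above.
    from ovs.db import idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn_str = 'unix:/run/openvswitch/db.sock'   # assumed default socket
    helper = idlutils.get_schema_helper(conn_str, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(connection.Connection(idl.Idl(conn_str, helper), timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapdb2ce91f-77', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapdb2ce91f-77',
            ('external_ids', {
                'iface-id': 'db2ce91f-7740-44a2-bab1-8455e2dfddde',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:4d:93:1a',
                'vm-uuid': '0975826c-6016-48c8-a7dd-1b10a32f91ba'})))
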
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.777 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:57 compute-0 NetworkManager[57033]: <info>  [1771532037.7783] manager: (tapdb2ce91f-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.780 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.788 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.790 188781 INFO os_vif [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77')
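
os-vif is a stand-alone library, so the plug that just succeeded can be reproduced outside nova. A minimal sketch built from the values in this log (the InstanceInfo name is taken from the domain XML above; run with privileges on the compute host):

    # Minimal os-vif plug of the port from this log.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    port = vif.VIFOpenVSwitch(
        id='db2ce91f-7740-44a2-bab1-8455e2dfddde',
        address='fa:16:3e:4d:93:1a',
        vif_name='tapdb2ce91f-77',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='db2ce91f-7740-44a2-bab1-8455e2dfddde'),
        network=network.Network(id='ec82c3b7-5389-43ab-a939-ce6cd12f9681'))
    info = instance_info.InstanceInfo(
        uuid='0975826c-6016-48c8-a7dd-1b10a32f91ba', name='instance-00000002')
    os_vif.plug(port, info)
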
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.847 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.848 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.848 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.849 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No VIF found with MAC fa:16:3e:4d:93:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:13:57 compute-0 nova_compute[188777]: 2026-02-19 20:13:57.850 188781 INFO nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Using config drive
Feb 19 20:13:57 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:13:57.751 188781 DEBUG nova.virt.libvirt.vif [None req-c044a091-17 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:13:58 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:13:57.769 188781 DEBUG nova.virt.libvirt.vif [None req-c044a091-17 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
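
These two rsyslogd complaints are a size limit, not corruption: the nova vif DEBUG messages carrying the full base64 user_data exceed the configured 8096-byte maximum (rsyslog error 2445), which is why those records appear split across lines earlier in this log. If the full messages matter, the limit can be raised in rsyslog.conf, e.g. global(maxMessageSize="16384"); the value shown is only an example.
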
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.228 188781 INFO nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Creating config drive at /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.config
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.239 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcfmrwnna execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.358 188781 DEBUG oslo_concurrency.processutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcfmrwnna" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
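
The config drive is a plain ISO 9660 image built from a scratch directory of metadata files. The same invocation as a Python subprocess call; note that -publisher takes a single multi-word argument, which the log's space-joined rendering of the argv list obscures:

    # The mkisofs command from the log, with the argv boundaries made explicit.
    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpcfmrwnna',   # nova's temporary metadata tree
    ], check=True)
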
Feb 19 20:13:58 compute-0 podman[242740]: 2026-02-19 20:13:58.406551448 +0000 UTC m=+0.088985753 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Feb 19 20:13:58 compute-0 kernel: tapdb2ce91f-77: entered promiscuous mode
Feb 19 20:13:58 compute-0 NetworkManager[57033]: <info>  [1771532038.4151] manager: (tapdb2ce91f-77): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Feb 19 20:13:58 compute-0 ovn_controller[98843]: 2026-02-19T20:13:58Z|00035|binding|INFO|Claiming lport db2ce91f-7740-44a2-bab1-8455e2dfddde for this chassis.
Feb 19 20:13:58 compute-0 ovn_controller[98843]: 2026-02-19T20:13:58Z|00036|binding|INFO|db2ce91f-7740-44a2-bab1-8455e2dfddde: Claiming fa:16:3e:4d:93:1a 192.168.0.213
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.418 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:58 compute-0 ovn_controller[98843]: 2026-02-19T20:13:58Z|00037|binding|INFO|Setting lport db2ce91f-7740-44a2-bab1-8455e2dfddde ovn-installed in OVS
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.423 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.424 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:58 compute-0 ovn_controller[98843]: 2026-02-19T20:13:58Z|00038|binding|INFO|Setting lport db2ce91f-7740-44a2-bab1-8455e2dfddde up in Southbound
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.426 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.426 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:93:1a 192.168.0.213'], port_security=['fa:16:3e:4d:93:1a 192.168.0.213'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5pf5gh4amqsx-kmyzbqhhqloy-unhgieiyt6e3-port-4pllgrspjkj2', 'neutron:cidrs': '192.168.0.213/24', 'neutron:device_id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5pf5gh4amqsx-kmyzbqhhqloy-unhgieiyt6e3-port-4pllgrspjkj2', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=db2ce91f-7740-44a2-bab1-8455e2dfddde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.429 108175 INFO neutron.agent.ovn.metadata.agent [-] Port db2ce91f-7740-44a2-bab1-8455e2dfddde in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 bound to our chassis
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.431 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:13:58 compute-0 systemd-udevd[242782]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:13:58 compute-0 systemd-machined[158158]: New machine qemu-2-instance-00000002.
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.445 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[47918b19-e1a9-4edf-bf35-a02e7bbbe5d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:13:58 compute-0 NetworkManager[57033]: <info>  [1771532038.4545] device (tapdb2ce91f-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:13:58 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb 19 20:13:58 compute-0 NetworkManager[57033]: <info>  [1771532038.4597] device (tapdb2ce91f-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.473 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[813c7136-02c0-443b-b4d9-3e08f61a1c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.476 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[e891f42e-3760-4dd0-90e6-0025464d546d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.497 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbfc52f-c1e5-472d-89b1-d916c41db487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.514 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddff3df-d573-4b3b-8529-576ad184a3ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 38775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242794, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.528 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6f58b9-42a4-46f4-bb3c-7a1ee5319414]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348361, 'tstamp': 348361}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242795, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348365, 'tstamp': 348365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242795, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.531 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.533 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.534 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.535 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.536 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.537 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:13:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:58.537 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.810 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532038.809293, 0975826c-6016-48c8-a7dd-1b10a32f91ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.810 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] VM Started (Lifecycle Event)
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.831 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.838 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532038.8094492, 0975826c-6016-48c8-a7dd-1b10a32f91ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.839 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] VM Paused (Lifecycle Event)
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.861 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.867 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:13:58 compute-0 nova_compute[188777]: 2026-02-19 20:13:58.888 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:13:59 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:59.304 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:13:59 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:13:59.305 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.306 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.581 188781 DEBUG nova.compute.manager [req-bf42cc9e-60f9-49d9-b8a3-e780c0c29f80 req-a919b284-6dea-46c0-b516-3893aa62b505 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.582 188781 DEBUG oslo_concurrency.lockutils [req-bf42cc9e-60f9-49d9-b8a3-e780c0c29f80 req-a919b284-6dea-46c0-b516-3893aa62b505 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.582 188781 DEBUG oslo_concurrency.lockutils [req-bf42cc9e-60f9-49d9-b8a3-e780c0c29f80 req-a919b284-6dea-46c0-b516-3893aa62b505 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.583 188781 DEBUG oslo_concurrency.lockutils [req-bf42cc9e-60f9-49d9-b8a3-e780c0c29f80 req-a919b284-6dea-46c0-b516-3893aa62b505 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.583 188781 DEBUG nova.compute.manager [req-bf42cc9e-60f9-49d9-b8a3-e780c0c29f80 req-a919b284-6dea-46c0-b516-3893aa62b505 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Processing event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.583 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.591 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532039.5906951, 0975826c-6016-48c8-a7dd-1b10a32f91ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.591 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] VM Resumed (Lifecycle Event)
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.592 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.597 188781 INFO nova.virt.libvirt.driver [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance spawned successfully.
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.598 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.619 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.626 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.662 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.672 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.673 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.673 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.674 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.675 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.675 188781 DEBUG nova.virt.libvirt.driver [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.741 188781 INFO nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Took 6.64 seconds to spawn the instance on the hypervisor.
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.742 188781 DEBUG nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:13:59 compute-0 podman[204724]: time="2026-02-19T20:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:13:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:13:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.800 188781 INFO nova.compute.manager [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Took 7.12 seconds to build instance.
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.816 188781 DEBUG oslo_concurrency.lockutils [None req-c044a091-17bb-426a-8168-3b67cd4cf03b 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.983 188781 DEBUG nova.network.neutron [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updated VIF entry in instance network info cache for port db2ce91f-7740-44a2-bab1-8455e2dfddde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:13:59 compute-0 nova_compute[188777]: 2026-02-19 20:13:59.983 188781 DEBUG nova.network.neutron [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:14:00 compute-0 nova_compute[188777]: 2026-02-19 20:14:00.005 188781 DEBUG oslo_concurrency.lockutils [req-23499c98-8c4d-47ae-9243-91029d650100 req-fd0c88dd-803c-4344-95f3-e85d6571a966 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:14:00 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:14:00.309 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:14:01 compute-0 openstack_network_exporter[207898]: ERROR   20:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:14:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:14:01 compute-0 openstack_network_exporter[207898]: ERROR   20:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:14:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:14:01 compute-0 nova_compute[188777]: 2026-02-19 20:14:01.652 188781 DEBUG nova.compute.manager [req-a0d37da1-6007-4008-915b-9c2c5228bdbd req-ea02da42-6f94-4726-9ce5-f59750f3e0cb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:14:01 compute-0 nova_compute[188777]: 2026-02-19 20:14:01.653 188781 DEBUG oslo_concurrency.lockutils [req-a0d37da1-6007-4008-915b-9c2c5228bdbd req-ea02da42-6f94-4726-9ce5-f59750f3e0cb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:14:01 compute-0 nova_compute[188777]: 2026-02-19 20:14:01.653 188781 DEBUG oslo_concurrency.lockutils [req-a0d37da1-6007-4008-915b-9c2c5228bdbd req-ea02da42-6f94-4726-9ce5-f59750f3e0cb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:14:01 compute-0 nova_compute[188777]: 2026-02-19 20:14:01.654 188781 DEBUG oslo_concurrency.lockutils [req-a0d37da1-6007-4008-915b-9c2c5228bdbd req-ea02da42-6f94-4726-9ce5-f59750f3e0cb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:14:01 compute-0 nova_compute[188777]: 2026-02-19 20:14:01.654 188781 DEBUG nova.compute.manager [req-a0d37da1-6007-4008-915b-9c2c5228bdbd req-ea02da42-6f94-4726-9ce5-f59750f3e0cb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] No waiting events found dispatching network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:14:01 compute-0 nova_compute[188777]: 2026-02-19 20:14:01.655 188781 WARNING nova.compute.manager [req-a0d37da1-6007-4008-915b-9c2c5228bdbd req-ea02da42-6f94-4726-9ce5-f59750f3e0cb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received unexpected event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde for instance with vm_state active and task_state None.
Feb 19 20:14:02 compute-0 nova_compute[188777]: 2026-02-19 20:14:02.633 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:02 compute-0 nova_compute[188777]: 2026-02-19 20:14:02.778 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:07 compute-0 nova_compute[188777]: 2026-02-19 20:14:07.635 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:07 compute-0 nova_compute[188777]: 2026-02-19 20:14:07.779 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:10 compute-0 podman[242805]: 2026-02-19 20:14:10.402969919 +0000 UTC m=+0.078767767 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z)
Feb 19 20:14:10 compute-0 podman[242806]: 2026-02-19 20:14:10.436257858 +0000 UTC m=+0.106678689 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:14:12 compute-0 nova_compute[188777]: 2026-02-19 20:14:12.637 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:12 compute-0 nova_compute[188777]: 2026-02-19 20:14:12.782 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:14 compute-0 podman[242850]: 2026-02-19 20:14:14.772758354 +0000 UTC m=+0.099380424 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 19 20:14:17 compute-0 nova_compute[188777]: 2026-02-19 20:14:17.640 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:17 compute-0 nova_compute[188777]: 2026-02-19 20:14:17.784 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:18 compute-0 podman[242870]: 2026-02-19 20:14:18.396141933 +0000 UTC m=+0.075692762 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:14:18 compute-0 podman[242869]: 2026-02-19 20:14:18.404326186 +0000 UTC m=+0.087329492 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 19 20:14:21 compute-0 sshd-session[242904]: Invalid user ubuntu from 158.180.74.7 port 21750
Feb 19 20:14:21 compute-0 podman[242906]: 2026-02-19 20:14:21.635413374 +0000 UTC m=+0.073952867 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:14:21 compute-0 sshd-session[242904]: Received disconnect from 158.180.74.7 port 21750:11: Bye Bye [preauth]
Feb 19 20:14:21 compute-0 sshd-session[242904]: Disconnected from invalid user ubuntu 158.180.74.7 port 21750 [preauth]
Feb 19 20:14:22 compute-0 nova_compute[188777]: 2026-02-19 20:14:22.644 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:22 compute-0 nova_compute[188777]: 2026-02-19 20:14:22.786 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:24 compute-0 podman[242928]: 2026-02-19 20:14:24.418717538 +0000 UTC m=+0.109789535 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 19 20:14:27 compute-0 nova_compute[188777]: 2026-02-19 20:14:27.647 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:27 compute-0 nova_compute[188777]: 2026-02-19 20:14:27.789 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:28 compute-0 ovn_controller[98843]: 2026-02-19T20:14:28Z|00039|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Feb 19 20:14:29 compute-0 podman[242949]: 2026-02-19 20:14:29.456125516 +0000 UTC m=+0.136124299 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 19 20:14:29 compute-0 podman[204724]: time="2026-02-19T20:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:14:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:14:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4356 "" "Go-http-client/1.1"
Feb 19 20:14:29 compute-0 ovn_controller[98843]: 2026-02-19T20:14:29Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:93:1a 192.168.0.213
Feb 19 20:14:29 compute-0 ovn_controller[98843]: 2026-02-19T20:14:29Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:93:1a 192.168.0.213
Feb 19 20:14:30 compute-0 nova_compute[188777]: 2026-02-19 20:14:30.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:30 compute-0 nova_compute[188777]: 2026-02-19 20:14:30.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:14:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:14:30.424 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:14:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:14:30.425 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:14:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:14:30.426 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:14:31 compute-0 nova_compute[188777]: 2026-02-19 20:14:31.282 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:31 compute-0 openstack_network_exporter[207898]: ERROR   20:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:14:31 compute-0 openstack_network_exporter[207898]: ERROR   20:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:14:32 compute-0 nova_compute[188777]: 2026-02-19 20:14:32.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:32 compute-0 nova_compute[188777]: 2026-02-19 20:14:32.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:32 compute-0 nova_compute[188777]: 2026-02-19 20:14:32.649 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:32 compute-0 nova_compute[188777]: 2026-02-19 20:14:32.794 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:33 compute-0 nova_compute[188777]: 2026-02-19 20:14:33.273 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.286 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.288 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.288 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.289 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.290 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.430 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.430 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.431 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.431 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.537 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.624 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.627 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.716 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.718 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.802 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.804 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.892 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.899 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.963 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:34 compute-0 nova_compute[188777]: 2026-02-19 20:14:34.964 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.052 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.053 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.139 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.141 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.201 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.655 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.658 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=72.22655487060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.658 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.659 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.936 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.937 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.938 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:14:35 compute-0 nova_compute[188777]: 2026-02-19 20:14:35.939 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.138 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.152 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.193 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.193 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.433 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.433 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.434 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:14:36 compute-0 nova_compute[188777]: 2026-02-19 20:14:36.434 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.651 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.796 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.822 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.840 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.841 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.842 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.842 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.842 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:14:37 compute-0 nova_compute[188777]: 2026-02-19 20:14:37.859 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:14:41 compute-0 podman[243011]: 2026-02-19 20:14:41.37879197 +0000 UTC m=+0.062997199 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:14:41 compute-0 podman[243010]: 2026-02-19 20:14:41.403828284 +0000 UTC m=+0.090325163 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, architecture=x86_64, release=1770267347, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 19 20:14:42 compute-0 nova_compute[188777]: 2026-02-19 20:14:42.654 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:42 compute-0 nova_compute[188777]: 2026-02-19 20:14:42.800 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:45 compute-0 podman[243056]: 2026-02-19 20:14:45.42133519 +0000 UTC m=+0.099281230 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:14:47 compute-0 nova_compute[188777]: 2026-02-19 20:14:47.656 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:47 compute-0 nova_compute[188777]: 2026-02-19 20:14:47.803 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:49 compute-0 podman[243078]: 2026-02-19 20:14:49.434537081 +0000 UTC m=+0.090745537 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:14:49 compute-0 podman[243077]: 2026-02-19 20:14:49.448444811 +0000 UTC m=+0.109960441 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 19 20:14:51 compute-0 sshd-session[243117]: Received disconnect from 103.119.94.10 port 58024:11: Bye Bye [preauth]
Feb 19 20:14:51 compute-0 sshd-session[243117]: Disconnected from authenticating user root 103.119.94.10 port 58024 [preauth]
Feb 19 20:14:52 compute-0 podman[243119]: 2026-02-19 20:14:52.398748237 +0000 UTC m=+0.087255839 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:14:52 compute-0 sshd-session[243055]: error: kex_exchange_identification: read: Connection timed out
Feb 19 20:14:52 compute-0 sshd-session[243055]: banner exchange: Connection from 125.94.106.195 port 52694: Connection timed out
Feb 19 20:14:52 compute-0 nova_compute[188777]: 2026-02-19 20:14:52.658 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:52 compute-0 nova_compute[188777]: 2026-02-19 20:14:52.806 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:55 compute-0 podman[243143]: 2026-02-19 20:14:55.425366575 +0000 UTC m=+0.100347994 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:14:57 compute-0 nova_compute[188777]: 2026-02-19 20:14:57.660 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:57 compute-0 nova_compute[188777]: 2026-02-19 20:14:57.807 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:14:59 compute-0 podman[204724]: time="2026-02-19T20:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:14:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:14:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4358 "" "Go-http-client/1.1"
Feb 19 20:15:00 compute-0 podman[243163]: 2026-02-19 20:15:00.395320076 +0000 UTC m=+0.084094411 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 19 20:15:01 compute-0 openstack_network_exporter[207898]: ERROR   20:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:15:01 compute-0 openstack_network_exporter[207898]: ERROR   20:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:15:02 compute-0 nova_compute[188777]: 2026-02-19 20:15:02.663 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:02 compute-0 nova_compute[188777]: 2026-02-19 20:15:02.810 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:07 compute-0 nova_compute[188777]: 2026-02-19 20:15:07.666 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:07 compute-0 nova_compute[188777]: 2026-02-19 20:15:07.814 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:12 compute-0 podman[243189]: 2026-02-19 20:15:12.377653176 +0000 UTC m=+0.064341650 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, version=9.7, build-date=2026-02-05T04:57:10Z, release=1770267347, managed_by=edpm_ansible, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Feb 19 20:15:12 compute-0 podman[243190]: 2026-02-19 20:15:12.381461573 +0000 UTC m=+0.062536094 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:15:12 compute-0 nova_compute[188777]: 2026-02-19 20:15:12.668 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:12 compute-0 nova_compute[188777]: 2026-02-19 20:15:12.817 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.137 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.138 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.139 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
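[annotation] The burst of register_pollster_execution lines above shows the agent binding each stevedore extension to one shared ThreadPoolExecutor, with empty cache, pollster-history, and discovery-cache dicts at the start of the cycle ("with [1] threads" matches the executor size). A minimal sketch of that pattern, assuming a hypothetical entry-point namespace 'example.pollsters' rather than ceilometer's real configuration:

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Load every plugin registered under a (hypothetical) entry-point namespace.
    mgr = extension.ExtensionManager(namespace='example.pollsters',
                                     invoke_on_load=True)

    executor = ThreadPoolExecutor(max_workers=1)   # the log shows [1] threads
    cache, history, discovery_cache = {}, {}, {}   # shared for the whole cycle

    def run_pollster(ext):
        # ext.obj is the instantiated plugin; the real agent would run its
        # discovery and then call get_samples() with the shared caches.
        return ext.name

    for fut in [executor.submit(run_pollster, ext) for ext in mgr]:
        print(fut.result())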
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.145 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.147 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 0975826c-6016-48c8-a7dd-1b10a32f91ba from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:15:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:15.148 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/0975826c-6016-48c8-a7dd-1b10a32f91ba -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.104 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Thu, 19 Feb 2026 20:15:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a33156af-6a8d-4071-a4f6-d6e5aa6d4368 x-openstack-request-id: req-a33156af-6a8d-4071-a4f6-d6e5aa6d4368 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.105 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "0975826c-6016-48c8-a7dd-1b10a32f91ba", "name": "vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a", "status": "ACTIVE", "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "user_id": "9f5597a45dc34ee19bcfe938afde768f", "metadata": {"metering.server_group": "78adc0ea-8772-4283-8bd6-6dbdcecee09e"}, "hostId": "fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7", "image": {"id": "e1a79c75-2fa3-410d-9c4c-91db3eeca51d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e1a79c75-2fa3-410d-9c4c-91db3eeca51d"}]}, "flavor": {"id": "8030bc1a-9afb-4678-ac07-8b59a1275925", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8030bc1a-9afb-4678-ac07-8b59a1275925"}]}, "created": "2026-02-19T20:13:51Z", "updated": "2026-02-19T20:13:59Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.213", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4d:93:1a"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4d:93:1a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/0975826c-6016-48c8-a7dd-1b10a32f91ba"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/0975826c-6016-48c8-a7dd-1b10a32f91ba"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:13:59.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.105 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/0975826c-6016-48c8-a7dd-1b10a32f91ba used request id req-a33156af-6a8d-4071-a4f6-d6e5aa6d4368 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.107 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'name': 'vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
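[annotation] The REQ/RESP pair above is keystoneauth1's HTTP debug logging for a python-novaclient call: the first instance (test_0) was resolved from local libvirt metadata, while 0975826c-6016-48c8-a7dd-1b10a32f91ba needed a Nova API round-trip (note the token is hashed as {SHA256} before logging, and the metering.server_group metadata only appears after the API reply). Roughly the same GET can be reproduced with the client libraries; the credentials and auth_url below are placeholders:

    import novaclient.client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='https://keystone.example.com/v3',  # placeholder
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    nova = novaclient.client.Client('2.1', session=sess)

    # The same call the discovery code makes when local metadata is missing.
    server = nova.servers.get('0975826c-6016-48c8-a7dd-1b10a32f91ba')
    print(server.name, server.metadata)  # e.g. {'metering.server_group': ...}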
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.107 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.108 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.108 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
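[annotation] The coordination check above reports group name [None] and hashrings [None]: no polling source on this node requests workload partitioning, so the agent polls every discovered resource itself. When a coordination group *is* configured, resources are split across agents with a hash ring. A generic consistent-hashing illustration (not ceilometer's actual ring implementation, which is built on tooz):

    import hashlib

    def ring_owner(agents, resource_id):
        # Hypothetical: pick one agent per resource by hashing, so each
        # member of a coordination group polls a disjoint subset.
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return sorted(agents)[digest % len(agents)]

    agents = ['compute-0', 'compute-1']
    mine = [r for r in ('5aaac42d-946d-4c6f-9bde-23b8b6613b59',
                        '0975826c-6016-48c8-a7dd-1b10a32f91ba')
            if ring_owner(agents, r) == 'compute-0']
    print(mine)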
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.108 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:15:16.108830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.116 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.122 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 0975826c-6016-48c8-a7dd-1b10a32f91ba / tapdb2ce91f-77 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.122 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.123 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
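[annotation] Each "volume:" line above is _stats_to_sample converting a libvirt inspector reading into a sample keyed by instance UUID and meter name. A hedged sketch of the fields such a sample carries, using a plain dict as a hypothetical stand-in (the real class lives in ceilometer.sample):

    from datetime import datetime, timezone

    def stats_to_sample(instance_id, meter, volume, unit):
        # Hypothetical: the essentials behind "<uuid>/<meter> volume: N".
        return {
            'name': meter,                 # e.g. network.outgoing.packets.error
            'resource_id': instance_id,
            'volume': volume,              # the number printed in the log
            'unit': unit,
            'timestamp': datetime.now(timezone.utc).isoformat(),
        }

    print(stats_to_sample('5aaac42d-946d-4c6f-9bde-23b8b6613b59',
                          'network.outgoing.packets.error', 0, 'packet'))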
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.123 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.124 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.124 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.124 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:15:16.124588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.125 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.125 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a>]
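[annotation] The ERROR above is expected behaviour rather than a fault: the line before it says LibvirtInspector provides no data for IncomingBytesRatePollster (libvirt exposes cumulative counters, not rates), so the pollster raises PollsterPermanentError and the manager blacklists those resources for this pollster/source pair instead of retrying every cycle. A sketch of that contract, assuming a simplified pollster built on the base class named in the traceback (ceilometer.polling.plugin_base):

    from ceilometer.polling import plugin_base

    class OutgoingRateSketch(plugin_base.PollsterBase):
        """Hypothetical rate pollster illustrating the permanent-error path."""

        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # The libvirt inspector only exposes cumulative counters, so a
            # *.rate meter can never be computed from it directly; raising
            # PollsterPermanentError tells the manager to stop asking for
            # these resources on this source.
            raise plugin_base.PollsterPermanentError(resources)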
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.125 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.126 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.126 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.126 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.126 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.126 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:15:16.126722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.127 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.128 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.129 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.129 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.129 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.129 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.130 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:15:16.129723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.130 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.131 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
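[annotation] Unlike *.rate meters, *.delta meters can be derived locally by subtracting the previous cumulative reading. That also explains the earlier "No delta meter predecessor for 0975826c-... / tapdb2ce91f-77" line: the first sighting of a vNIC has no baseline, which is why instance-00000002's delta reads 0 while test_0's reads 1878. A hedged sketch of the delta bookkeeping:

    _previous = {}  # (resource, meter) -> last cumulative reading

    def delta_sample(resource, meter, cumulative):
        # Hypothetical delta-meter logic: the first sighting has no
        # predecessor, matching "No delta meter predecessor" in the log.
        key = (resource, meter)
        prev = _previous.get(key)
        _previous[key] = cumulative
        if prev is None:
            return None
        return max(cumulative - prev, 0)  # guard against counter resets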
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.132 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.132 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.132 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.132 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.132 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.133 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.133 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.134 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:15:16.132737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.135 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.135 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.135 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.135 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:15:16.135819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.171 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.199 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.199 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
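[annotation] power.state volume 1 is the Nova power-state code, consistent with "OS-EXT-STS:power_state": 1 in the API response logged earlier. For reference, the code table as defined in nova.compute.power_state:

    # Nova power_state codes; volume 1 == RUNNING for both instances.
    POWER_STATES = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    assert POWER_STATES[1] == 'RUNNING'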
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.200 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.201 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.201 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.201 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:15:16.200453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:15:16.201841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.230 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.230 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.231 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.262 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.263 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.263 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.264 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
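[annotation] disk.device.capacity reports three block devices per instance: the two 1073741824-byte volumes are exactly 1 GiB, matching the flavor's disk=1 and ephemeral=1, and the small third device (485376 / 583680 bytes) is consistent with the config drive ("config_drive": "True" in the Nova response). A quick arithmetic check:

    assert 1073741824 == 1024 ** 3        # 1 GiB root and ephemeral disks
    print(485376 / 1024, 583680 / 1024)   # config-drive sizes in KiB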
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.264 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.264 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.265 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.265 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.265 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.265 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:15:16.265434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.356 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.356 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.356 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 podman[243234]: 2026-02-19 20:15:16.411004764 +0000 UTC m=+0.091032496 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
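[annotation] The podman line interleaved above is a periodic container healthcheck for ovn_metadata_agent; its config_data shows the probe is the '/openstack/healthcheck' script bind-mounted into the container, and health_status=healthy with health_failing_streak=0 means it passed. The same probe can be run by hand; a sketch using subprocess (the container name is taken from the log, the podman binary is assumed to be on PATH):

    import subprocess

    # Run the container's configured healthcheck once, as the timer does.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True)
    # Exit code 0 corresponds to health_status=healthy in the journal.
    print('healthy' if result.returncode == 0 else 'unhealthy')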
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.435 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.436 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.436 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.437 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 32760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.438 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/cpu volume: 30310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.438 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
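[annotation] The cpu meter is a cumulative counter of guest CPU time in nanoseconds (32760000000 ns is about 32.8 s for instance-00000001). Utilisation is derived downstream from two consecutive samples; a hedged sketch of that post-processing, with the follow-up reading and interval invented for illustration:

    def cpu_util_pct(prev_ns, curr_ns, elapsed_s, vcpus=1):
        # Fraction of available CPU time used between two cumulative
        # samples; both flavors in this log have vcpus=1.
        return 100.0 * (curr_ns - prev_ns) / (elapsed_s * 1e9 * vcpus)

    # Hypothetical: if a poll 300 s later read 33060000000 ns, the
    # instance averaged 0.1 % CPU over the interval.
    print(cpu_util_pct(32760000000, 33060000000, 300.0))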
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.438 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.438 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:15:16.437714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.438 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.439 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.439 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.439 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.439 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a>]
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.440 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.440 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.440 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.440 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.440 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.440 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:15:16.439614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:15:16.440652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.441 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.441 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.442 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 695414045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.442 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 126021412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.442 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 99179876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.442 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.443 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.444 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
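The block above is one complete pollster cycle: a discovery pass over local instances, a coordination check, a heartbeat, one "volume:" debug line per instance (or per device), and a closing INFO line. A toy, self-contained model of that cycle is sketched below; the class and function names are hypothetical illustrations, not ceilometer's actual internals:

    # Toy model of one polling cycle (hypothetical names, not ceilometer code).
    class ToyPollster:
        name = 'network.incoming.packets.drop'

        def get_samples(self, resources):
            # One reading per instance UUID, like the "volume: 0" lines above.
            for uuid, counters in resources.items():
                yield uuid, self.name, counters.get('rx_drop', 0)

    def run_polling_cycle(pollsters, discover):
        for pollster in pollsters:
            resources = discover()  # the "Executing discovery process" step
            for uuid, meter, volume in pollster.get_samples(resources):
                print(f'{uuid}/{meter} volume: {volume}')
            print(f'Finished polling pollster {pollster.name}')

    if __name__ == '__main__':
        instances = {'5aaac42d-946d-4c6f-9bde-23b8b6613b59': {'rx_drop': 0},
                     '0975826c-6016-48c8-a7dd-1b10a32f91ba': {'rx_drop': 0}}
        run_polling_cycle([ToyPollster()], lambda: instances)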
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.444 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.444 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.444 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.444 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:15:16.443517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:15:16.444900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.444 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.445 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.445 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.446 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
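Note the thread interleaving around the heartbeats: the polling thread (15) records a heartbeat for each meter, while a separate status thread (12) logs "Updated heartbeat for ..." slightly later, which is why a .445 update can land between .444 polling lines. A minimal sketch of that hand-off, assuming a simple queue between the two threads (ceilometer's actual mechanism may differ in detail):

    import queue
    import threading
    from datetime import datetime, timezone

    heartbeats = queue.Queue()

    def heartbeat(meter):
        # Called from the polling thread for every meter it touches.
        heartbeats.put((meter, datetime.now(timezone.utc)))

    def status_updater():
        # The separate thread that persists and logs the updates.
        while True:
            meter, ts = heartbeats.get()
            print(f'Updated heartbeat for {meter} ({ts.isoformat()})')

    threading.Thread(target=status_updater, daemon=True).start()
    heartbeat('network.incoming.packets.error')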
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:15:16.447352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.447 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.448 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.448 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.448 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.448 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.448 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:15:16.449368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.450 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.451 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:15:16.450514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.451 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.451 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.451 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.452 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:15:16.452564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.453 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.453 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.453 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.453 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.454 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.455 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:15:16.454521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.455 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.455 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.455 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.456 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.457 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 1784408728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:15:16.456483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.457 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 9187833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.457 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.457 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.457 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:15:16.458236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.458 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:15:16.459294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.459 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.460 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:15:16.461040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.461 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:15:16.461871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.462 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:15:16.462798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.463 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:15:16.463814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:15:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:15:16.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
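The burst of "Finished processing pollster [...]" lines closes one polling task and enumerates every meter assigned to it. That assignment comes from ceilometer's polling.yaml; below is a sketch of a definition that would produce such a task. The source name matches the "in the context of pollsters" lines above, while the interval and the wildcard grouping are assumptions:

    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.root.size
          - disk.ephemeral.size
          - disk.device.*
          - network.incoming.*
          - network.outgoing.*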
Feb 19 20:15:17 compute-0 nova_compute[188777]: 2026-02-19 20:15:17.670 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:17 compute-0 nova_compute[188777]: 2026-02-19 20:15:17.819 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
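The recurring "[POLLIN] on fd 26" wakeups come from the ovs Poller that ovsdbapp blocks on between OVSDB updates (the __log_wakeup debug line lives in ovs/poller.py, as the path shows). A minimal sketch of that wait, assuming sock is an already-connected OVSDB socket:

    import select
    from ovs import poller

    def wait_for_update(sock):
        p = poller.Poller()
        p.fd_wait(sock.fileno(), select.POLLIN)  # watch the connection fd
        p.block()  # sleeps in poll(); on wakeup logs "[POLLIN] on fd N" at debug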
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.292 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.320 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.321 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 0975826c-6016-48c8-a7dd-1b10a32f91ba _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.322 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.323 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.324 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.325 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.368 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:15:19 compute-0 nova_compute[188777]: 2026-02-19 20:15:19.371 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
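The Acquiring/acquired/released trio above is oslo.concurrency's synchronized decorator (the "inner" wrapper in lockutils.py) guarding per-instance work with a lock keyed on the instance UUID. The same pattern in miniature; the function body here is a placeholder:

    from oslo_concurrency import lockutils

    def sync_power_state(instance_uuid):
        @lockutils.synchronized(instance_uuid)
        def query_driver_power_state_and_sync():
            # Reconcile the driver's power state with the DB record;
            # held for roughly 0.05 s per instance in the log above.
            pass
        query_driver_power_state_and_sync()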
Feb 19 20:15:20 compute-0 podman[243254]: 2026-02-19 20:15:20.407129407 +0000 UTC m=+0.087359082 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git)
Feb 19 20:15:20 compute-0 podman[243255]: 2026-02-19 20:15:20.437642231 +0000 UTC m=+0.118991381 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:15:22 compute-0 nova_compute[188777]: 2026-02-19 20:15:22.672 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:22 compute-0 nova_compute[188777]: 2026-02-19 20:15:22.822 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:23 compute-0 podman[243297]: 2026-02-19 20:15:23.426356705 +0000 UTC m=+0.098639371 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:15:26 compute-0 podman[243320]: 2026-02-19 20:15:26.396097844 +0000 UTC m=+0.083457702 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, tcib_managed=true)
Feb 19 20:15:27 compute-0 nova_compute[188777]: 2026-02-19 20:15:27.674 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:27 compute-0 nova_compute[188777]: 2026-02-19 20:15:27.824 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:29 compute-0 podman[204724]: time="2026-02-19T20:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:15:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:15:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4358 "" "Go-http-client/1.1"
Feb 19 20:15:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:15:30.424 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:15:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:15:30.425 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:15:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:15:30.425 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:15:31 compute-0 podman[243340]: 2026-02-19 20:15:31.391543583 +0000 UTC m=+0.078415965 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127)
Feb 19 20:15:31 compute-0 openstack_network_exporter[207898]: ERROR   20:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:15:31 compute-0 openstack_network_exporter[207898]: ERROR   20:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:15:32 compute-0 nova_compute[188777]: 2026-02-19 20:15:32.676 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:32 compute-0 nova_compute[188777]: 2026-02-19 20:15:32.828 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:33 compute-0 nova_compute[188777]: 2026-02-19 20:15:33.293 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:33 compute-0 nova_compute[188777]: 2026-02-19 20:15:33.294 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:33 compute-0 nova_compute[188777]: 2026-02-19 20:15:33.295 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.287 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.288 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.289 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.699 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.758 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.760 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.810 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.811 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.867 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.869 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.917 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:34 compute-0 nova_compute[188777]: 2026-02-19 20:15:34.927 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.007 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.009 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.092 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.094 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.152 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.154 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.210 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.625 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.628 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5037MB free_disk=72.22647476196289GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.629 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.631 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.909 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.910 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.911 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.912 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.973 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.986 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.988 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:15:35 compute-0 nova_compute[188777]: 2026-02-19 20:15:35.988 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:15:36 compute-0 nova_compute[188777]: 2026-02-19 20:15:36.989 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:36 compute-0 nova_compute[188777]: 2026-02-19 20:15:36.990 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:15:37 compute-0 nova_compute[188777]: 2026-02-19 20:15:37.644 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:15:37 compute-0 nova_compute[188777]: 2026-02-19 20:15:37.645 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:15:37 compute-0 nova_compute[188777]: 2026-02-19 20:15:37.646 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:15:37 compute-0 nova_compute[188777]: 2026-02-19 20:15:37.679 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:37 compute-0 nova_compute[188777]: 2026-02-19 20:15:37.830 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:39 compute-0 nova_compute[188777]: 2026-02-19 20:15:39.395 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:15:39 compute-0 nova_compute[188777]: 2026-02-19 20:15:39.700 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:15:39 compute-0 nova_compute[188777]: 2026-02-19 20:15:39.701 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:15:39 compute-0 nova_compute[188777]: 2026-02-19 20:15:39.703 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:39 compute-0 nova_compute[188777]: 2026-02-19 20:15:39.704 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:39 compute-0 nova_compute[188777]: 2026-02-19 20:15:39.706 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:15:42 compute-0 nova_compute[188777]: 2026-02-19 20:15:42.683 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:42 compute-0 nova_compute[188777]: 2026-02-19 20:15:42.833 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:43 compute-0 podman[243391]: 2026-02-19 20:15:43.406596367 +0000 UTC m=+0.080979425 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:15:43 compute-0 podman[243390]: 2026-02-19 20:15:43.415986517 +0000 UTC m=+0.096229006 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1770267347, vendor=Red Hat, Inc.)
Feb 19 20:15:44 compute-0 sshd-session[243431]: Invalid user titu from 83.235.16.111 port 57708
Feb 19 20:15:44 compute-0 sshd-session[243431]: Received disconnect from 83.235.16.111 port 57708:11: Bye Bye [preauth]
Feb 19 20:15:44 compute-0 sshd-session[243431]: Disconnected from invalid user titu 83.235.16.111 port 57708 [preauth]
Feb 19 20:15:47 compute-0 podman[243434]: 2026-02-19 20:15:47.411761939 +0000 UTC m=+0.100624813 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 19 20:15:47 compute-0 nova_compute[188777]: 2026-02-19 20:15:47.685 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:47 compute-0 nova_compute[188777]: 2026-02-19 20:15:47.836 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:49 compute-0 sshd-session[243451]: Invalid user user1 from 103.250.11.249 port 51386
Feb 19 20:15:49 compute-0 sshd-session[243451]: Received disconnect from 103.250.11.249 port 51386:11: Bye Bye [preauth]
Feb 19 20:15:49 compute-0 sshd-session[243451]: Disconnected from invalid user user1 103.250.11.249 port 51386 [preauth]
Feb 19 20:15:51 compute-0 podman[243454]: 2026-02-19 20:15:51.423461823 +0000 UTC m=+0.099600311 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Feb 19 20:15:51 compute-0 podman[243455]: 2026-02-19 20:15:51.435093082 +0000 UTC m=+0.108620339 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 19 20:15:52 compute-0 nova_compute[188777]: 2026-02-19 20:15:52.685 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:52 compute-0 nova_compute[188777]: 2026-02-19 20:15:52.839 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:54 compute-0 podman[243489]: 2026-02-19 20:15:54.381933523 +0000 UTC m=+0.065345752 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:15:57 compute-0 podman[243521]: 2026-02-19 20:15:57.408057465 +0000 UTC m=+0.086023540 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:15:57 compute-0 nova_compute[188777]: 2026-02-19 20:15:57.688 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:57 compute-0 nova_compute[188777]: 2026-02-19 20:15:57.841 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:15:58 compute-0 sshd-session[243513]: Received disconnect from 125.31.2.160 port 34136:11: Bye Bye [preauth]
Feb 19 20:15:58 compute-0 sshd-session[243513]: Disconnected from authenticating user root 125.31.2.160 port 34136 [preauth]
Feb 19 20:15:59 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:15:59 compute-0 podman[204724]: time="2026-02-19T20:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:15:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:15:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
Feb 19 20:16:01 compute-0 openstack_network_exporter[207898]: ERROR   20:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:16:01 compute-0 openstack_network_exporter[207898]: ERROR   20:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:16:02 compute-0 podman[243543]: 2026-02-19 20:16:02.464613814 +0000 UTC m=+0.140513646 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 19 20:16:02 compute-0 nova_compute[188777]: 2026-02-19 20:16:02.689 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:02 compute-0 nova_compute[188777]: 2026-02-19 20:16:02.844 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:07 compute-0 sshd-session[243515]: error: kex_exchange_identification: read: Connection timed out
Feb 19 20:16:07 compute-0 sshd-session[243515]: banner exchange: Connection from 39.182.13.137 port 14231: Connection timed out
Feb 19 20:16:07 compute-0 nova_compute[188777]: 2026-02-19 20:16:07.691 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:07 compute-0 nova_compute[188777]: 2026-02-19 20:16:07.846 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:12 compute-0 nova_compute[188777]: 2026-02-19 20:16:12.694 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:12 compute-0 nova_compute[188777]: 2026-02-19 20:16:12.849 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:14 compute-0 podman[243570]: 2026-02-19 20:16:14.41309485 +0000 UTC m=+0.090652397 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:16:14 compute-0 podman[243569]: 2026-02-19 20:16:14.418300381 +0000 UTC m=+0.102343619 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 19 20:16:17 compute-0 nova_compute[188777]: 2026-02-19 20:16:17.696 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:17 compute-0 nova_compute[188777]: 2026-02-19 20:16:17.852 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:18 compute-0 podman[243616]: 2026-02-19 20:16:18.419890233 +0000 UTC m=+0.102183883 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:16:21 compute-0 sshd-session[243635]: Invalid user dixi from 125.94.106.195 port 38808
Feb 19 20:16:21 compute-0 podman[243638]: 2026-02-19 20:16:21.757416156 +0000 UTC m=+0.112092952 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, release-0.7.12=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, distribution-scope=public)
Feb 19 20:16:21 compute-0 podman[243639]: 2026-02-19 20:16:21.764458175 +0000 UTC m=+0.117408497 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:16:21 compute-0 sshd-session[243635]: Received disconnect from 125.94.106.195 port 38808:11: Bye Bye [preauth]
Feb 19 20:16:21 compute-0 sshd-session[243635]: Disconnected from invalid user dixi 125.94.106.195 port 38808 [preauth]
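Mixed in with the service noise, sshd on this host is being probed from the internet: a banner-exchange timeout from 39.182.13.137 at 20:16:07, and just above a failed preauth login for the invalid user "dixi" from 125.94.106.195. A small sketch for tallying such attempts per source address when scanning this journal (the regex mirrors sshd's message format above; it is not a fail2ban replacement):

```python
import re
from collections import Counter

# sshd logs: 'Invalid user <name> from <ip> port <port>'
INVALID_RE = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def count_probes(lines):
    per_ip = Counter()
    for line in lines:
        m = INVALID_RE.search(line)
        if m:
            per_ip[m.group(2)] += 1   # key on the source address
    return per_ip
```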
Feb 19 20:16:22 compute-0 nova_compute[188777]: 2026-02-19 20:16:22.698 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:22 compute-0 nova_compute[188777]: 2026-02-19 20:16:22.855 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:25 compute-0 podman[243676]: 2026-02-19 20:16:25.426397091 +0000 UTC m=+0.100748349 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:16:27 compute-0 nova_compute[188777]: 2026-02-19 20:16:27.699 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:27 compute-0 nova_compute[188777]: 2026-02-19 20:16:27.857 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:28 compute-0 podman[243700]: 2026-02-19 20:16:28.427012402 +0000 UTC m=+0.108780048 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 19 20:16:29 compute-0 podman[204724]: time="2026-02-19T20:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:16:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:16:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4358 "" "Go-http-client/1.1"
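The three podman[204724] lines are the podman API service answering a client over the libpod REST socket (podman_exporter polls it): an info message about the `last` parameter overriding `limit`, then two logged HTTP GETs, one for the container list and one for container stats. A raw-socket sketch of the same list query, assuming the service socket is at /run/podman/podman.sock (the path is an assumption; the URL and API version are copied from the log):

```python
import socket

def libpod_list_containers(path="/run/podman/podman.sock"):
    # Replay the GET logged above over the libpod unix socket.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
              b"Host: d\r\nConnection: close\r\n\r\n")
    chunks = []
    while True:
        data = s.recv(65536)
        if not data:
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks)   # raw HTTP response: headers + JSON body
```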
Feb 19 20:16:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:16:30.426 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:16:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:16:30.427 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:16:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:16:30.428 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
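The acquire/wait/release triple around "_check_child_processes" is the standard trace of oslo.concurrency's lockutils.synchronized decorator (the `inner` frames at lockutils.py:404/409/423), which neutron's ProcessMonitor uses to serialize its child-process liveness checks. A minimal sketch of the same pattern, assuming oslo.concurrency is available:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named lock held; oslo emits the
    # "Acquiring"/"acquired"/"released" debug lines seen above.
    pass
```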
Feb 19 20:16:31 compute-0 openstack_network_exporter[207898]: ERROR   20:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:16:31 compute-0 openstack_network_exporter[207898]: ERROR   20:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
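The two ERRORs above come from openstack_network_exporter invoking the OVS unixctl commands dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only apply to a userspace (netdev/DPDK) datapath; when the bridges use the kernel datapath, ovs-vswitchd answers "please specify an existing datapath" and the PMD collectors have nothing to report. One way to check which datapaths actually exist, sketched as a subprocess call to a standard ovs-appctl command:

```python
import subprocess

def list_datapaths():
    # "ovs-appctl dpif/show" lists the datapaths ovs-vswitchd manages;
    # a netdev datapath must exist for the dpif-netdev/pmd-* commands.
    out = subprocess.run(["ovs-appctl", "dpif/show"],
                         capture_output=True, text=True, check=True)
    return out.stdout
```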
Feb 19 20:16:32 compute-0 nova_compute[188777]: 2026-02-19 20:16:32.702 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:32 compute-0 nova_compute[188777]: 2026-02-19 20:16:32.859 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:33 compute-0 podman[243720]: 2026-02-19 20:16:33.47930712 +0000 UTC m=+0.158920956 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:16:34 compute-0 nova_compute[188777]: 2026-02-19 20:16:34.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:34 compute-0 nova_compute[188777]: 2026-02-19 20:16:34.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:35 compute-0 nova_compute[188777]: 2026-02-19 20:16:35.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:35 compute-0 nova_compute[188777]: 2026-02-19 20:16:35.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:35 compute-0 nova_compute[188777]: 2026-02-19 20:16:35.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:16:36 compute-0 nova_compute[188777]: 2026-02-19 20:16:36.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:36 compute-0 nova_compute[188777]: 2026-02-19 20:16:36.263 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:16:36 compute-0 nova_compute[188777]: 2026-02-19 20:16:36.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:16:37 compute-0 nova_compute[188777]: 2026-02-19 20:16:37.076 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:16:37 compute-0 nova_compute[188777]: 2026-02-19 20:16:37.077 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:16:37 compute-0 nova_compute[188777]: 2026-02-19 20:16:37.078 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:16:37 compute-0 nova_compute[188777]: 2026-02-19 20:16:37.079 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:16:37 compute-0 nova_compute[188777]: 2026-02-19 20:16:37.705 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:37 compute-0 nova_compute[188777]: 2026-02-19 20:16:37.861 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.412 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.442 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.443 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
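The heal pass above handles one instance per periodic cycle: take the per-instance refresh_cache lock, lazy-load info_cache, force-refresh it from Neutron, and store the network_info list logged at 20:16:38.412. A sketch of extracting the addresses from that structure, using the keys exactly as they appear in the logged JSON:

```python
def addresses(network_info):
    # network_info is the list logged above: one dict per VIF, with
    # network -> subnets -> ips (fixed), each carrying floating_ips.
    fixed, floating = [], []
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fixed.append(ip["address"])
                for fip in ip.get("floating_ips", []):
                    floating.append(fip["address"])
    return fixed, floating

# For the cache entry above:
#   (["192.168.0.193"], ["192.168.122.219"])
```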
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.444 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.444 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.445 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.469 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.470 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.471 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.471 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.555 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.641 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.642 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.713 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.715 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.796 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.799 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.848 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.858 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.908 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.910 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.958 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:38 compute-0 nova_compute[188777]: 2026-02-19 20:16:38.959 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.009 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.011 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.097 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
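Every qemu-img probe above is wrapped in `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`, capping the child at 1 GiB of address space and 30 s of CPU so a malformed or hostile disk image cannot wedge the resource audit. Nova builds that command line through oslo.concurrency; a sketch of the equivalent call, assuming oslo.concurrency is installed:

```python
from oslo_concurrency import processutils

# The limits visible in the logged command line.
QEMU_IMG_LIMITS = processutils.ProcessLimits(
    cpu_time=30,              # --cpu=30
    address_space=1024 ** 3,  # --as=1073741824
)

def qemu_img_info(path):
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
        prlimit=QEMU_IMG_LIMITS)
    return out
```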
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.527 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.530 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5053MB free_disk=72.22647094726562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
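Note that every PCI device in this resource view reports "numa_node": null, as is typical for virtio devices on a KVM guest, which is consistent with the warning above that the `socket` PCI NUMA affinity policy cannot be supported here. A sketch for spotting affinity-less devices in the logged JSON list:

```python
import json

def devices_without_numa(pci_devices_json):
    # pci_devices_json is the JSON array from the resource view above.
    return [dev["address"]
            for dev in json.loads(pci_devices_json)
            if dev["numa_node"] is None]
```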
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.531 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.532 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.631 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.632 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.632 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.633 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.692 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.707 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.710 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:16:39 compute-0 nova_compute[188777]: 2026-02-19 20:16:39.711 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
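The inventory record at 20:16:39.707 is what placement actually schedules against: per resource class, capacity is (total - reserved) x allocation_ratio, so this node advertises 7167 MB of RAM, 32 VCPUs, and 70.2 GB of disk despite the physical 7679/8/79 figures. The arithmetic, spelled out:

```python
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2
```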
Feb 19 20:16:40 compute-0 nova_compute[188777]: 2026-02-19 20:16:40.708 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:40 compute-0 nova_compute[188777]: 2026-02-19 20:16:40.727 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:16:42 compute-0 nova_compute[188777]: 2026-02-19 20:16:42.707 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:42 compute-0 nova_compute[188777]: 2026-02-19 20:16:42.863 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:44 compute-0 podman[243769]: 2026-02-19 20:16:44.742276213 +0000 UTC m=+0.078673614 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 19 20:16:44 compute-0 podman[243770]: 2026-02-19 20:16:44.768316882 +0000 UTC m=+0.100443069 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:16:47 compute-0 nova_compute[188777]: 2026-02-19 20:16:47.710 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:47 compute-0 nova_compute[188777]: 2026-02-19 20:16:47.866 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:49 compute-0 podman[243811]: 2026-02-19 20:16:49.390060661 +0000 UTC m=+0.070754988 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:16:52 compute-0 podman[243832]: 2026-02-19 20:16:52.395967976 +0000 UTC m=+0.081137420 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi)
Feb 19 20:16:52 compute-0 podman[243831]: 2026-02-19 20:16:52.402442328 +0000 UTC m=+0.094490795 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler)
Feb 19 20:16:52 compute-0 nova_compute[188777]: 2026-02-19 20:16:52.712 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:52 compute-0 nova_compute[188777]: 2026-02-19 20:16:52.869 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
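The recurring [POLLIN] on fd 26 lines come from ovsdbapp's OVSDB IDL loop: ovs.poller registers the database connection's file descriptor, blocks in poll(), and logs __log_wakeup at DEBUG whenever the fd becomes readable, i.e. whenever the OVSDB server sends an update or echo. A sketch of the underlying poller API from the same ovs package logged above, using a pipe as a stand-in for the connection's fd:

    import os
    import select
    from ovs import poller   # python3-ovs, same package as ovs/poller.py above

    rfd, wfd = os.pipe()      # stand-in for the OVSDB connection's fd 26
    os.write(wfd, b"update")  # simulate the server sending a notification

    p = poller.Poller()
    p.fd_wait(rfd, select.POLLIN)   # register interest, as the IDL does
    p.block()                       # returns once rfd is readable ([POLLIN])
    print("woke up:", os.read(rfd, 16))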
Feb 19 20:16:56 compute-0 podman[243865]: 2026-02-19 20:16:56.395801195 +0000 UTC m=+0.069841170 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:16:57 compute-0 nova_compute[188777]: 2026-02-19 20:16:57.715 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:57 compute-0 nova_compute[188777]: 2026-02-19 20:16:57.871 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:16:59 compute-0 podman[243888]: 2026-02-19 20:16:59.430954288 +0000 UTC m=+0.119658326 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Feb 19 20:16:59 compute-0 podman[204724]: time="2026-02-19T20:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:16:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:16:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
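These two GET requests are the podman_exporter scraping podman's libpod REST API over /run/podman/podman.sock (the CONTAINER_HOST in its config_data above). The same endpoint can be queried directly over the socket; a stdlib-only sketch, run with enough privilege to read the socket, with the socket path and API version taken from these log lines:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), "containers")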
Feb 19 20:17:01 compute-0 openstack_network_exporter[207898]: ERROR   20:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:17:01 compute-0 openstack_network_exporter[207898]: ERROR   20:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
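The two appctl errors above are the openstack_network_exporter probing PMD statistics that only exist for Open vSwitch's userspace (netdev/DPDK) datapath; the "please specify an existing datapath" reply suggests this node's OVS runs the kernel datapath, so no dpif-netdev datapath exists to report on. The exporter's call is equivalent to running the unixctl command by hand; a sketch assuming ovs-appctl is installed and the local ovs-vswitchd is reachable:

    import subprocess

    # On a kernel-datapath host this exits non-zero with the same
    # "please specify an existing datapath" complaint seen above.
    result = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
        capture_output=True, text=True,
    )
    print(result.returncode, (result.stdout or result.stderr).strip())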
Feb 19 20:17:02 compute-0 nova_compute[188777]: 2026-02-19 20:17:02.716 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:02 compute-0 nova_compute[188777]: 2026-02-19 20:17:02.873 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:04 compute-0 podman[243906]: 2026-02-19 20:17:04.431716916 +0000 UTC m=+0.115459636 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 19 20:17:07 compute-0 nova_compute[188777]: 2026-02-19 20:17:07.719 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:07 compute-0 nova_compute[188777]: 2026-02-19 20:17:07.875 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:12 compute-0 nova_compute[188777]: 2026-02-19 20:17:12.722 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:12 compute-0 nova_compute[188777]: 2026-02-19 20:17:12.878 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.138 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.138 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
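The two manager lines above say the [pollsters] source defines more pollsters than the worker threads configured to run them (a single thread here), so pollster executions are queued on one ThreadPoolExecutor rather than run concurrently, which is also why every registration line below names the same executor object. A minimal sketch of that pattern, with stand-in pollster callables (the thread count is read from this log, not from ceilometer documentation):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f"polled {name}"   # stand-in for a single pollster execution

    pollsters = ["network.outgoing.packets.error", "network.outgoing.packets",
                 "network.incoming.bytes.delta", "power.state", "cpu"]

    # 1 worker for 5 pollsters: submissions queue and run one at a time,
    # matching "Processing pollsters for [pollsters] with [1] threads".
    with ThreadPoolExecutor(max_workers=1) as executor:
        for future in [executor.submit(poll, p) for p in pollsters]:
            print(future.result())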
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.138 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.147 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.153 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'name': 'vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
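The instance data dicts above come from ceilometer's libvirt-based discovery: rather than calling the Nova API each cycle, the agent lists local libvirt domains and reads the Nova metadata embedded in each domain's XML under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace (the same URI kepler's LIBVIRT_METADATA_URI points at). A sketch of that lookup with the libvirt Python bindings; the qemu:///system URI is the usual Nova default and an assumption here:

    import xml.etree.ElementTree as ET
    import libvirt   # python3-libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        root = ET.fromstring(dom.XMLDesc(0))
        name = root.find(f".//{{{NOVA_NS}}}name")   # e.g. "test_0"
        print(dom.UUIDString(), name.text if name is not None else "?")
    conn.close()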
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.154 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.154 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.154 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.154 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:17:15.154864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.163 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.170 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.171 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.171 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.171 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.171 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.172 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.172 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.172 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.172 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.173 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.173 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.174 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.174 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.174 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.175 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.175 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:17:15.172794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.176 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.176 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.177 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes.delta volume: 3363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
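Note the three flavors of the same network counter in this cycle: network.incoming.bytes (cumulative), *.delta (difference since the previous poll), and *.rate (per second). The delta samples above (0 for one instance, 3363 for the other) are simply the cumulative reading minus the previous cached reading. A worked sketch of that bookkeeping, with made-up counter values:

    previous = {}

    def delta_sample(instance, cumulative):
        # First cycle yields 0; afterwards emit current minus cached reading.
        prior = previous.get(instance, cumulative)
        previous[instance] = cumulative
        return cumulative - prior

    print(delta_sample("0975826c", 10000))   # first poll  -> 0
    print(delta_sample("0975826c", 13363))   # second poll -> 3363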
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.178 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.178 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:17:15.175963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.179 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.179 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.179 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:17:15.179248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.180 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes volume: 4694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.180 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.181 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.181 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.181 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.181 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.182 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:17:15.182149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.222 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.248 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.249 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
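The power.state volume of 1 for both instances appears to correspond to the raw libvirt domain state code, where 1 is VIR_DOMAIN_RUNNING, consistent with both instances' OS-EXT-STS:vm_state 'running' in the discovery output above. A sketch of reading that state directly; the connection URI is assumed, the domain name is taken from the log:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # OS-EXT-SRV-ATTR:instance_name
    state, _reason = dom.state()
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1, True for a running VM
    conn.close()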
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.249 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.249 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.250 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.250 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.250 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.251 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:17:15.250411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.251 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes.delta volume: 2548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.252 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.252 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.252 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.253 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:17:15.253750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.292 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.293 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.293 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.342 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.343 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.343 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.344 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
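Each instance yields three disk.device.capacity samples because it exposes three block devices: two of exactly 1073741824 bytes (1 GiB, matching the m1.small flavor's 1 GB root and 1 GB ephemeral disks in the discovery output) plus a small third device of a few hundred KiB (485376 and 583680 bytes), consistent with a small config-drive image. Per-device capacity is what libvirt's blockInfo reports; a sketch with assumed device names:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    for dev in ("vda", "vdb"):                     # device names are assumptions
        # blockInfo returns [capacity, allocation, physical] in bytes.
        capacity, allocation, physical = dom.blockInfo(dev, 0)
        print(dev, capacity, f"{capacity / 2**30:.2f} GiB")
    conn.close()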
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.345 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.345 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.345 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.346 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.346 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:17:15.346396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 podman[243930]: 2026-02-19 20:17:15.416121771 +0000 UTC m=+0.099115739 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public)
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.428 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.429 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.429 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 podman[243931]: 2026-02-19 20:17:15.450625272 +0000 UTC m=+0.125980493 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.496 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.496 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.496 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.497 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 34330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.498 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/cpu volume: 143810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:17:15.497864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.498 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.499 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.500 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 699163782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.500 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 126021412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.500 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 99179876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.500 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:17:15.499327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.500 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.501 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.501 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.501 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.501 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.501 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.501 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.502 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.503 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:17:15.501305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.503 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.503 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.503 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.504 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.504 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.504 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:17:15.502757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.504 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.505 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.505 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.505 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.505 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.506 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.506 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.506 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.506 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.506 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.506 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:17:15.504143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:17:15.506569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.507 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.508 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.508 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.508 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.508 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.509 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.509 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.509 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.509 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.510 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.510 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.510 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.510 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.510 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.511 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.511 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.511 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.511 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.512 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.512 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.512 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.512 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.512 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.512 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:17:15.507924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:17:15.510396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:17:15.512829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.513 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.514 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.514 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.514 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.514 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.514 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.515 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.515 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.515 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:17:15.515055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.515 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.516 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 1796614628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.516 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 9187833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.516 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.516 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.517 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:17:15.517427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.518 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.519 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.519 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.519 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.520 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.520 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.520 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.520 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.520 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.521 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.521 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.521 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.521 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.521 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.522 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.522 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.522 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.523 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.523 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.523 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.523 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.523 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/memory.usage volume: 49.08984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:17:15.518848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:17:15.521356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:17:15.522414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:17:15.523708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.524 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.525 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.525 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.525 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.525 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.526 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:17:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:17:15.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:17:15.524984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
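The block above is one complete ceilometer polling cycle: every configured pollster is processed in turn, then the manager stamps a heartbeat for the cycle. A minimal sketch of that loop (illustrative names only; the real logic lives in ceilometer/polling/manager.py):

from datetime import datetime, timezone

POLLSTERS = [
    "network.incoming.packets.drop",
    "disk.device.read.requests",
    "memory.usage",
    "network.incoming.bytes",
]

heartbeats: dict[str, str] = {}

def execute_polling_task_processing(pollsters):
    for name in pollsters:
        # Real pollsters gather samples from libvirt/OVS here.
        print(f"Finished processing pollster [{name}].")
        heartbeats[name] = datetime.now(timezone.utc).isoformat()

execute_polling_task_processing(POLLSTERS)
print("Updated heartbeat for network.incoming.bytes", heartbeats["network.incoming.bytes"])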
Feb 19 20:17:17 compute-0 nova_compute[188777]: 2026-02-19 20:17:17.725 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:17 compute-0 nova_compute[188777]: 2026-02-19 20:17:17.880 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
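The recurring ovsdbapp [POLLIN] lines are the OVS IDL waking up because the OVSDB connection (fd 26 here) became readable. A bare-bones stdlib equivalent of that readiness wait (a sketch, not ovs/poller.py itself):

import select, socket

def wait_for_pollin(sock: socket.socket, timeout: float = 5.0) -> bool:
    # select() returns the socket in the readable list when data is
    # pending -- the condition ovs/poller.py logs as "[POLLIN] on fd N".
    readable, _, _ = select.select([sock], [], [], timeout)
    return bool(readable)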
Feb 19 20:17:20 compute-0 podman[243975]: 2026-02-19 20:17:20.446089735 +0000 UTC m=+0.126436287 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
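health_status events like the one above are emitted each time podman runs the container's configured healthcheck (the 'test': '/openstack/healthcheck' entry in config_data). The same status can be read back out-of-band; a sketch, with a fallback because the State field name differs across podman versions:

import json, subprocess

def health_status(container: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", container],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(health_status("ovn_metadata_agent"))  # expected: "healthy"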
Feb 19 20:17:22 compute-0 nova_compute[188777]: 2026-02-19 20:17:22.727 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:22 compute-0 nova_compute[188777]: 2026-02-19 20:17:22.883 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:23 compute-0 podman[243995]: 2026-02-19 20:17:23.408845141 +0000 UTC m=+0.090947495 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:17:23 compute-0 podman[243994]: 2026-02-19 20:17:23.417804439 +0000 UTC m=+0.108498390 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9)
Feb 19 20:17:27 compute-0 podman[244033]: 2026-02-19 20:17:27.407789431 +0000 UTC m=+0.089400327 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:17:27 compute-0 nova_compute[188777]: 2026-02-19 20:17:27.729 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:27 compute-0 nova_compute[188777]: 2026-02-19 20:17:27.886 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:29 compute-0 podman[204724]: time="2026-02-19T20:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:17:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:17:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
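Those two GETs are podman_exporter scraping the libpod REST API over the unix socket its config_data mounts (/run/podman/podman.sock). The same request by hand with only the stdlib (a sketch; run as a user with access to the socket):

import http.client, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        # Swap the TCP connect for an AF_UNIX one; HTTP itself is unchanged.
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
print(conn.getresponse().status)  # 200, as in the access-log lines above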
Feb 19 20:17:30 compute-0 podman[244057]: 2026-02-19 20:17:30.401306932 +0000 UTC m=+0.088202140 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:17:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:17:30.428 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:17:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:17:30.428 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:17:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:17:30.429 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
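The Acquiring/acquired/released triplet is oslo_concurrency.lockutils instrumentation: it reports how long the caller waited for the named lock and how long it was held. A stripped-down equivalent (sketch):

import threading, time
from contextlib import contextmanager

_locks: dict[str, threading.Lock] = {}

@contextmanager
def timed_lock(name: str):
    lock = _locks.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    with lock:
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

with timed_lock("_check_child_processes"):
    pass  # ProcessMonitor._check_child_processes body would run here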
Feb 19 20:17:31 compute-0 openstack_network_exporter[207898]: ERROR   20:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:17:31 compute-0 openstack_network_exporter[207898]: ERROR   20:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
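These two errors repeat every scrape: the exporter invokes the dpif-netdev/pmd-* appctl commands, which only exist for the userspace (netdev/DPDK) datapath, and this host runs the kernel datapath, so ovs-vswitchd rejects the call. Reproducing it by hand (sketch; requires ovs-appctl on the host):

import subprocess

r = subprocess.run(
    ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
    capture_output=True, text=True,
)
# On a kernel-datapath host this exits non-zero with the same message:
# "please specify an existing datapath"
print(r.returncode, (r.stderr or r.stdout).strip())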
Feb 19 20:17:32 compute-0 nova_compute[188777]: 2026-02-19 20:17:32.732 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:32 compute-0 nova_compute[188777]: 2026-02-19 20:17:32.888 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:35 compute-0 nova_compute[188777]: 2026-02-19 20:17:35.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:35 compute-0 nova_compute[188777]: 2026-02-19 20:17:35.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:35 compute-0 nova_compute[188777]: 2026-02-19 20:17:35.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:35 compute-0 nova_compute[188777]: 2026-02-19 20:17:35.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
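_reclaim_queued_deletes is skipped because reclaim_instance_interval defaults to 0; soft-deleted instances are only reclaimed once the operator sets a positive interval. The gating logic, reduced to a sketch (the option name is real, the body here is illustrative):

reclaim_instance_interval = 0  # nova.conf [DEFAULT] default; >0 enables reclaim

def _reclaim_queued_deletes() -> None:
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ...query SOFT_DELETED instances older than the interval and purge them...

_reclaim_queued_deletes()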
Feb 19 20:17:35 compute-0 podman[244076]: 2026-02-19 20:17:35.40432161 +0000 UTC m=+0.097231300 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:17:36 compute-0 nova_compute[188777]: 2026-02-19 20:17:36.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.263 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.634 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.636 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.637 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.733 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:37 compute-0 nova_compute[188777]: 2026-02-19 20:17:37.890 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.380 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.399 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.401 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
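The info-cache heal above rewrites the instance's network_info blob, which carries the fixed and floating addresses visible in the JSON. Extracting them from such a blob (sketch, trimmed to the fields present in the log line):

import json

network_info = json.loads("""
[{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde",
  "network": {"subnets": [{"ips": [{"address": "192.168.0.213",
    "floating_ips": [{"address": "192.168.122.212"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floats)
# db2ce91f-7740-44a2-bab1-8455e2dfddde 192.168.0.213 -> ['192.168.122.212']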
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.403 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.405 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.406 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.431 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.433 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.433 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.434 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.534 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.617 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.618 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.700 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.701 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.761 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.763 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.846 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.853 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.900 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.902 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.967 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:39 compute-0 nova_compute[188777]: 2026-02-19 20:17:39.968 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.032 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.034 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.083 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
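Each disk audit runs qemu-img info under oslo_concurrency.prlimit with --as=1073741824 and --cpu=30, i.e. a 1 GiB address-space cap and a 30 s CPU cap, so a malformed or hostile image cannot wedge the compute agent. An equivalent stdlib invocation (sketch):

import json, resource, subprocess

def qemu_img_info(path: str) -> dict:
    def _limits():
        # Mirrors prlimit --as=1073741824 --cpu=30 in the child process.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True, preexec_fn=_limits,
    ).stdout
    return json.loads(out)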
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.457 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.458 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=72.22647094726562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.459 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.459 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.532 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.532 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.533 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.533 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.582 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.596 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.598 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:17:40 compute-0 nova_compute[188777]: 2026-02-19 20:17:40.598 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
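Placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio, so the inventory reported above leaves more room than the raw totals suggest. The arithmetic, using the exact figures from the log:

inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU comes out at 32, consistent with "Total usable vcpus: 8,
# total allocated vcpus: 2" plus the 4.0 allocation ratio.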
Feb 19 20:17:42 compute-0 nova_compute[188777]: 2026-02-19 20:17:42.737 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:42 compute-0 nova_compute[188777]: 2026-02-19 20:17:42.894 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:43 compute-0 nova_compute[188777]: 2026-02-19 20:17:43.458 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:17:46 compute-0 podman[244126]: 2026-02-19 20:17:46.429120997 +0000 UTC m=+0.095380552 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:17:46 compute-0 podman[244125]: 2026-02-19 20:17:46.466373263 +0000 UTC m=+0.130777861 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 19 20:17:47 compute-0 nova_compute[188777]: 2026-02-19 20:17:47.739 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:47 compute-0 nova_compute[188777]: 2026-02-19 20:17:47.897 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:51 compute-0 podman[244167]: 2026-02-19 20:17:51.368689394 +0000 UTC m=+0.057137254 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:17:52 compute-0 nova_compute[188777]: 2026-02-19 20:17:52.742 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:52 compute-0 nova_compute[188777]: 2026-02-19 20:17:52.899 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:54 compute-0 podman[244186]: 2026-02-19 20:17:54.412150676 +0000 UTC m=+0.088721605 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, config_id=kepler, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Feb 19 20:17:54 compute-0 podman[244187]: 2026-02-19 20:17:54.41744522 +0000 UTC m=+0.099297083 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:17:57 compute-0 nova_compute[188777]: 2026-02-19 20:17:57.745 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:57 compute-0 nova_compute[188777]: 2026-02-19 20:17:57.902 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:17:58 compute-0 podman[244225]: 2026-02-19 20:17:58.406738981 +0000 UTC m=+0.091008276 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:17:59 compute-0 podman[204724]: time="2026-02-19T20:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:17:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:17:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4357 "" "Go-http-client/1.1"
Feb 19 20:17:59 compute-0 sshd-session[244224]: Received disconnect from 125.94.106.195 port 56842:11: Bye Bye [preauth]
Feb 19 20:17:59 compute-0 sshd-session[244224]: Disconnected from authenticating user root 125.94.106.195 port 56842 [preauth]
Feb 19 20:18:01 compute-0 openstack_network_exporter[207898]: ERROR   20:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:18:01 compute-0 openstack_network_exporter[207898]: ERROR   20:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
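The two appctl errors above recur on every scrape: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only apply to the userspace (netdev) datapath, so on a host running the kernel datapath the exporter's PMD probes fail with "please specify an existing datapath". A quick way to confirm which datapaths exist, sketched with subprocess and assuming ovs-appctl is reachable on the host:

    # Sketch: list OVS datapaths; the pmd-* commands above only work
    # when a userspace (netdev) datapath is present.
    import subprocess

    out = subprocess.run(
        ["ovs-appctl", "dpif/show"],
        capture_output=True, text=True, check=True,
    ).stdout
    # A kernel datapath shows as "system@ovs-system"; with no netdev
    # datapath, the exporter's PMD calls will keep failing as above.
    print(out)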
Feb 19 20:18:01 compute-0 podman[244250]: 2026-02-19 20:18:01.450842743 +0000 UTC m=+0.128461739 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:18:02 compute-0 nova_compute[188777]: 2026-02-19 20:18:02.748 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:02 compute-0 nova_compute[188777]: 2026-02-19 20:18:02.903 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:06 compute-0 podman[244270]: 2026-02-19 20:18:06.422106775 +0000 UTC m=+0.113486505 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Feb 19 20:18:07 compute-0 nova_compute[188777]: 2026-02-19 20:18:07.750 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:07 compute-0 nova_compute[188777]: 2026-02-19 20:18:07.906 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:12 compute-0 nova_compute[188777]: 2026-02-19 20:18:12.751 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:12 compute-0 nova_compute[188777]: 2026-02-19 20:18:12.907 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:17 compute-0 podman[244297]: 2026-02-19 20:18:17.43037158 +0000 UTC m=+0.113276258 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:18:17 compute-0 podman[244296]: 2026-02-19 20:18:17.434464967 +0000 UTC m=+0.121063070 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_id=openstack_network_exporter, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible)
Feb 19 20:18:17 compute-0 nova_compute[188777]: 2026-02-19 20:18:17.753 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:17 compute-0 nova_compute[188777]: 2026-02-19 20:18:17.910 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:22.133 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:18:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:22.135 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:18:22 compute-0 nova_compute[188777]: 2026-02-19 20:18:22.143 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:22 compute-0 podman[244339]: 2026-02-19 20:18:22.415243874 +0000 UTC m=+0.098519318 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 19 20:18:22 compute-0 nova_compute[188777]: 2026-02-19 20:18:22.760 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:22 compute-0 nova_compute[188777]: 2026-02-19 20:18:22.913 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:25.141 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
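This transaction is the metadata agent acknowledging the SB_Global nb_cfg bump it matched at 20:18:22: after the 3-second delay it writes nb_cfg back into its own Chassis_Private row so northd can see the chassis is caught up. A sketch of the equivalent call through ovsdbapp's public API; sb_idl stands in for an already-connected southbound API object and is hypothetical here:

    # Sketch of the call behind the DbSetCommand above. `sb_idl` is a
    # placeholder for a connected ovsdbapp southbound backend.
    def ack_nb_cfg(sb_idl, chassis_uuid: str, nb_cfg: int) -> None:
        # Mirrors: DbSetCommand(table=Chassis_Private, record=<uuid>,
        #          col_values=(('external_ids', {...}),), if_exists=True)
        sb_idl.db_set(
            "Chassis_Private",
            chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)

    # Record UUID from the log: e2fe6bb6-fad0-4563-8388-215a30f03e3f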
Feb 19 20:18:25 compute-0 podman[244358]: 2026-02-19 20:18:25.434577018 +0000 UTC m=+0.115632838 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, name=ubi9, io.openshift.tags=base rhel9, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64)
Feb 19 20:18:25 compute-0 podman[244359]: 2026-02-19 20:18:25.456665771 +0000 UTC m=+0.131975953 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.759 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.766 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.767 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.789 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.877 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.878 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
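The Acquiring/acquired/released triplets come from oslo.concurrency's lockutils: nova serializes each build under a per-instance lock so only one _locked_do_build_and_run_instance runs per UUID, then takes a separate "compute_resources" lock for the resource claim. A minimal sketch of the pattern that produces these lines; the function body is illustrative:

    # Sketch: the lock pattern behind the Acquiring/acquired/released lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("14ed9fe0-b150-4bd8-852e-7f2f62d4374b")
    def _locked_do_build_and_run_instance():
        # Exactly one thread per lock name executes this body at a time;
        # entry and exit are what lockutils logs at DEBUG.
        pass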
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.889 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.890 188781 INFO nova.compute.claims [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:18:27 compute-0 nova_compute[188777]: 2026-02-19 20:18:27.916 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.078 188781 DEBUG nova.compute.provider_tree [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.097 188781 DEBUG nova.scheduler.client.report [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
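The inventory dict above fixes the node's schedulable capacity: placement exposes (total - reserved) * allocation_ratio per resource class, so this host advertises 32 VCPU, 7167 MB of RAM and 70.2 GB of disk. The arithmetic, spelled out:

    # Sketch: capacity implied by the inventory data in the log line above,
    # using placement's (total - reserved) * allocation_ratio rule.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity}")  # MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: 70.2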
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.132 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.134 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.195 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.196 188781 DEBUG nova.network.neutron [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.217 188781 INFO nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.259 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.364 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.377 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.378 188781 INFO nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Creating image(s)
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.380 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.381 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.383 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.399 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.448 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.449 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.450 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.464 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.519 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.521 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.573 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
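The create_qcow2_image sequence above is nova's qcow2 image backend at work: the instance disk is created as a copy-on-write overlay whose backing file is the cached base image under _base, so the 1 GiB virtual disk consumes almost no space until the guest writes. The command from the log, reduced to its essentials with the paths logged above:

    # Sketch: the qcow2 overlay creation from the log line above.
    import subprocess

    base = "/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a"
    disk = "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk"
    subprocess.run(
        [
            "qemu-img", "create", "-f", "qcow2",
            "-o", f"backing_file={base},backing_fmt=raw",
            disk,
            "1073741824",  # 1 GiB virtual size, as in the log
        ],
        check=True,
    )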
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.575 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.576 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.624 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.627 188781 DEBUG nova.virt.disk.api [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Checking if we can resize image /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.628 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.681 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.683 188781 DEBUG nova.virt.disk.api [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Cannot resize image /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
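The "Cannot resize image ... to a smaller size" DEBUG is nova's can_resize_image guard: it reads the current virtual size from qemu-img info and only ever grows a disk. A sketch of that check; the helper name mirrors the log, and the implementation is an approximation:

    # Sketch: the shrink guard behind the "Cannot resize image" message.
    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        info = json.loads(
            subprocess.run(
                ["qemu-img", "info", path, "--force-share", "--output=json"],
                capture_output=True, text=True, check=True,
            ).stdout
        )
        # Only a strictly larger target passes; an equal or smaller size
        # is refused, which is what the log records here.
        return new_size > info["virtual-size"]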
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.684 188781 DEBUG nova.objects.instance [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'migration_context' on Instance uuid 14ed9fe0-b150-4bd8-852e-7f2f62d4374b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.704 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.705 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.707 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.735 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.797 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.799 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.800 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.816 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.867 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.869 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.923 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.925 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.926 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.972 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.973 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.974 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Ensure instance console log exists: /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.974 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.975 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:28 compute-0 nova_compute[188777]: 2026-02-19 20:18:28.975 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:29 compute-0 podman[244421]: 2026-02-19 20:18:29.398900209 +0000 UTC m=+0.082062819 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:18:29 compute-0 podman[204724]: time="2026-02-19T20:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:18:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:18:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Feb 19 20:18:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:30.429 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:30.430 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:30.431 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:31 compute-0 openstack_network_exporter[207898]: ERROR   20:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:18:31 compute-0 openstack_network_exporter[207898]: ERROR   20:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:18:32 compute-0 nova_compute[188777]: 2026-02-19 20:18:32.204 188781 DEBUG nova.network.neutron [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Successfully updated port: 9838caff-8a65-491d-8b0d-3fb3d10c299c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:18:32 compute-0 nova_compute[188777]: 2026-02-19 20:18:32.219 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:18:32 compute-0 nova_compute[188777]: 2026-02-19 20:18:32.220 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquired lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:18:32 compute-0 nova_compute[188777]: 2026-02-19 20:18:32.220 188781 DEBUG nova.network.neutron [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:18:32 compute-0 podman[244445]: 2026-02-19 20:18:32.416006464 +0000 UTC m=+0.099376854 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 19 20:18:32 compute-0 nova_compute[188777]: 2026-02-19 20:18:32.762 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:32 compute-0 nova_compute[188777]: 2026-02-19 20:18:32.918 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:33 compute-0 nova_compute[188777]: 2026-02-19 20:18:33.144 188781 DEBUG nova.network.neutron [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:18:33 compute-0 nova_compute[188777]: 2026-02-19 20:18:33.266 188781 DEBUG nova.compute.manager [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-changed-9838caff-8a65-491d-8b0d-3fb3d10c299c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:18:33 compute-0 nova_compute[188777]: 2026-02-19 20:18:33.267 188781 DEBUG nova.compute.manager [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Refreshing instance network info cache due to event network-changed-9838caff-8a65-491d-8b0d-3fb3d10c299c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:18:33 compute-0 nova_compute[188777]: 2026-02-19 20:18:33.268 188781 DEBUG oslo_concurrency.lockutils [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:18:33 compute-0 sshd-session[244465]: Invalid user admin from 83.235.16.111 port 35298
Feb 19 20:18:33 compute-0 sshd-session[244465]: Received disconnect from 83.235.16.111 port 35298:11: Bye Bye [preauth]
Feb 19 20:18:33 compute-0 sshd-session[244465]: Disconnected from invalid user admin 83.235.16.111 port 35298 [preauth]
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.397 188781 DEBUG nova.network.neutron [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updating instance_info_cache with network_info: [{"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
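The network_info payload in the cache update above is plain JSON. A short sketch (assuming the bracketed list has been copied into the string network_info) that walks it and collects the fixed and floating addresses:

    import json

    def addresses(network_info):
        out = []
        for vif in json.loads(network_info):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    out.append(("fixed", ip["address"]))
                    for fip in ip.get("floating_ips", []):
                        out.append(("floating", fip["address"]))
        return out

    # For the record above:
    # [('fixed', '192.168.0.86'), ('floating', '192.168.122.207')]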
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.429 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Releasing lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.430 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Instance network_info: |[{"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.432 188781 DEBUG oslo_concurrency.lockutils [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.433 188781 DEBUG nova.network.neutron [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Refreshing network info cache for port 9838caff-8a65-491d-8b0d-3fb3d10c299c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.439 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Start _get_guest_xml network_info=[{"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}], 'ephemerals': [{'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.450 188781 WARNING nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.468 188781 DEBUG nova.virt.libvirt.host [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.470 188781 DEBUG nova.virt.libvirt.host [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.477 188781 DEBUG nova.virt.libvirt.host [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.478 188781 DEBUG nova.virt.libvirt.host [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
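The two probes above first look for a cgroup-v1 cpu controller (absent here), then find one under cgroup v2. A minimal sketch of the v2 check, using only the standard kernel interface rather than nova's exact code:

    def has_cgroupsv2_cpu_controller():
        # On a cgroup-v2 host the enabled controllers are listed,
        # space-separated, in this single file.
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroup-v2 (unified) hierarchy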
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.480 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.481 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:11:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8030bc1a-9afb-4678-ac07-8b59a1275925',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.483 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.484 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.486 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.487 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.489 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.490 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.491 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.493 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.494 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.495 188781 DEBUG nova.virt.hardware [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
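With no flavor or image constraints (all limits 0:0:0), the only topology whose product matches the single vCPU is 1:1:1, which is what the lines above settle on. A sketch of that enumeration, illustrative rather than nova's exact algorithm:

    import itertools

    def possible_topologies(vcpus, max_each=65536):
        # Enumerate (sockets, cores, threads) whose product equals the
        # vCPU count, within the logged per-dimension limit of 65536.
        return [(s, c, t)
                for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3)
                if s * c * t == vcpus and max(s, c, t) <= max_each]

    # possible_topologies(1) -> [(1, 1, 1)]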
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.501 188781 DEBUG nova.virt.libvirt.vif [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:18:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp',id=3,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-e6veut89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:18:28Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3ODg4OTgxMzIxNzM3NTU0MDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc4ODg5ODEzMjE3Mzc1NTQwND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3ODg4OTgxMzIxNzM3NTU0MDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncw==
Feb 19 20:18:35 compute-0 nova_compute[188777]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc4ODg5ODEzMjE3Mzc1NTQwND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3ODg4OTgxMzIxNzM3NTU0MDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=14ed9fe0-b150-4bd8-852e-7f2f62d4374b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.502 188781 DEBUG nova.network.os_vif_util [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.503 188781 DEBUG nova.network.os_vif_util [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.504 188781 DEBUG nova.objects.instance [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'pci_devices' on Instance uuid 14ed9fe0-b150-4bd8-852e-7f2f62d4374b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.520 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <uuid>14ed9fe0-b150-4bd8-852e-7f2f62d4374b</uuid>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <name>instance-00000003</name>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <memory>524288</memory>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:name>vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp</nova:name>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:18:35</nova:creationTime>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:flavor name="m1.small">
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:memory>512</nova:memory>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:ephemeral>1</nova:ephemeral>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:user uuid="9f5597a45dc34ee19bcfe938afde768f">admin</nova:user>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:project uuid="59f01dee51a74ac1a9f82733f591827d">admin</nova:project>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="e1a79c75-2fa3-410d-9c4c-91db3eeca51d"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         <nova:port uuid="9838caff-8a65-491d-8b0d-3fb3d10c299c">
Feb 19 20:18:35 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="192.168.0.86" ipVersion="4"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <system>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <entry name="serial">14ed9fe0-b150-4bd8-852e-7f2f62d4374b</entry>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <entry name="uuid">14ed9fe0-b150-4bd8-852e-7f2f62d4374b</entry>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </system>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <os>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </os>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <features>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </features>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <target dev="vdb" bus="virtio"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.config"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:9c:8b:13"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <target dev="tap9838caff-8a"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/console.log" append="off"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <video>
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </video>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:18:35 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:18:35 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:18:35 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:18:35 compute-0 nova_compute[188777]: </domain>
Feb 19 20:18:35 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
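The generated domain definition above is ordinary libvirt XML and can be inspected offline. A small sketch (assuming the <domain> document has been saved into the string xml_text) that pulls out a few of the fields nova chose:

    import xml.etree.ElementTree as ET

    dom = ET.fromstring(xml_text)
    print(dom.get("type"))                      # kvm
    print(dom.findtext("name"))                 # instance-00000003
    print(int(dom.findtext("memory")) // 1024)  # 512 MiB, matching m1.small
    for disk in dom.findall("./devices/disk"):
        # vda (root), vdb (ephemeral) and the sda config-drive cdrom
        print(disk.get("device"), disk.find("source").get("file"))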
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.530 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Preparing to wait for external event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.530 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.531 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.531 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
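The acquire/release pair above is oslo.concurrency's named-lock pattern; the lock name scopes the critical section to this instance's event list. A sketch of the same pattern:

    from oslo_concurrency import lockutils

    # synchronized() takes the named lock on entry and logs the waited
    # time, runs the body, then logs the held time on release - the
    # three lockutils lines above.
    @lockutils.synchronized('14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events')
    def _create_or_get_event():
        ...  # create or fetch the pending network-vif-plugged event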
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.532 188781 DEBUG nova.virt.libvirt.vif [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:18:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp',id=3,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-e6veut89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:18:28Z,user_data='Content-Type: multipart/mixed; boundary="===============5788898132173755404=="
MIME-Version: 1.0

--===============5788898132173755404==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============5788898132173755404==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============5788898132173755404==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============5788898132173755404==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============5788898132173755404==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============5788898132173755404==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============5788898132173755404==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============5788898132173755404==--
',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=14ed9fe0-b150-4bd8-852e-7f2f62d4374b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.532 188781 DEBUG nova.network.os_vif_util [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.533 188781 DEBUG nova.network.os_vif_util [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.533 188781 DEBUG os_vif [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
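os_vif is the public entry point being exercised here: nova converts its own VIF dict to the os-vif object shown, then dispatches to the "ovs" plugin. In sketch form (vif and instance_info assumed to be the converted objects from the preceding lines):

    import os_vif

    os_vif.initialize()              # load the vif plugins (ovs, ...)
    os_vif.plug(vif, instance_info)  # the call logged at os_vif/__init__.py:76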
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.534 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.534 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.534 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.540 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.540 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9838caff-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.541 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9838caff-8a, col_values=(('external_ids', {'iface-id': '9838caff-8a65-491d-8b0d-3fb3d10c299c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:8b:13', 'vm-uuid': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
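The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp commands run against the local OVS database (note the first one is a no-op: "Transaction caused no change" because br-int already exists). A sketch of the same three operations through ovsdbapp's public API; the socket path and timeout are assumptions, while the bridge, port, and external_ids values are the logged ones.

```python
# Sketch of the logged OVSDB transaction via ovsdbapp; the db.sock
# path and 10 s timeout are assumptions for illustration.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                        'Open_vSwitch'),
    timeout=10)
api = impl_idl.OvsdbIdl(conn)

with api.transaction(check_error=True) as txn:
    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    # AddPortCommand(bridge=br-int, port=tap9838caff-8a, may_exist=True)
    txn.add(api.add_port('br-int', 'tap9838caff-8a', may_exist=True))
    # DbSetCommand(table=Interface, record=tap9838caff-8a, ...)
    txn.add(api.db_set('Interface', 'tap9838caff-8a',
                       ('external_ids',
                        {'iface-id': '9838caff-8a65-491d-8b0d-3fb3d10c299c',
                         'iface-status': 'active',
                         'attached-mac': 'fa:16:3e:9c:8b:13',
                         'vm-uuid': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b'})))
```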
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.542 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:35 compute-0 NetworkManager[57033]: <info>  [1771532315.5439] manager: (tap9838caff-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.544 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.554 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.554 188781 INFO os_vif [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a')
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.599 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.600 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.600 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.600 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No VIF found with MAC fa:16:3e:9c:8b:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:18:35 compute-0 nova_compute[188777]: 2026-02-19 20:18:35.600 188781 INFO nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Using config drive
Feb 19 20:18:35 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:18:35.501 188781 DEBUG nova.virt.libvirt.vif [None req-b0f74b3f-6c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
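The rsyslogd line above records a truncation: one of the multi-kilobyte nova DEBUG messages hit 8192 bytes against rsyslog's default 8096-byte ceiling (the linked error page covers this case). If whole messages are needed, the documented knob is $MaxMessageSize, which must be set before any input module is loaded; the value below is an example, not a recommendation.

```
# /etc/rsyslog.conf -- must appear before module/input loading
$MaxMessageSize 64k
```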
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.288 188781 INFO nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Creating config drive at /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.config
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.295 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpv6v_njoi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.422 188781 DEBUG oslo_concurrency.processutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpv6v_njoi" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
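Config-drive creation shells out to mkisofs via oslo.concurrency, as the command/return pair above shows (the staging directory under /tmp is per-run). The same invocation expressed through processutils.execute(), with arguments copied verbatim from the log:

```python
# Reproduces the logged mkisofs command through oslo.concurrency;
# paths are the ones from the log and would differ on another run.
from oslo_concurrency import processutils

out, err = processutils.execute(
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9',
    '-quiet', '-J', '-r', '-V', 'config-2',
    '/tmp/tmpv6v_njoi')
```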
Feb 19 20:18:36 compute-0 kernel: tap9838caff-8a: entered promiscuous mode
Feb 19 20:18:36 compute-0 NetworkManager[57033]: <info>  [1771532316.5182] manager: (tap9838caff-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 19 20:18:36 compute-0 ovn_controller[98843]: 2026-02-19T20:18:36Z|00040|binding|INFO|Claiming lport 9838caff-8a65-491d-8b0d-3fb3d10c299c for this chassis.
Feb 19 20:18:36 compute-0 ovn_controller[98843]: 2026-02-19T20:18:36Z|00041|binding|INFO|9838caff-8a65-491d-8b0d-3fb3d10c299c: Claiming fa:16:3e:9c:8b:13 192.168.0.86
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.521 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.524 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.531 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:13 192.168.0.86'], port_security=['fa:16:3e:9c:8b:13 192.168.0.86'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5pf5gh4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-port-lze5z7eiiyr5', 'neutron:cidrs': '192.168.0.86/24', 'neutron:device_id': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5pf5gh4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-port-lze5z7eiiyr5', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=9838caff-8a65-491d-8b0d-3fb3d10c299c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.533 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 9838caff-8a65-491d-8b0d-3fb3d10c299c in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 bound to our chassis
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.535 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
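The metadata agent reacts to the binding through an ovsdbapp row event: the PortBindingUpdatedEvent above matched a Port_Binding row whose chassis column flipped from [] to this chassis (see the old=Port_Binding(chassis=[]) tail of the match). A stripped-down sketch of that event class; the real agent additionally filters on the requested chassis and datapath, which is omitted here.

```python
# Simplified row event in the style of the PortBindingUpdatedEvent
# matched above; registration with the agent's event handler is
# omitted for brevity.
from ovsdbapp.backend.ovs_idl import event


class PortBindingUpdatedEvent(event.RowEvent):
    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # exactly as printed in the "Matched UPDATE" line above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event_type, row, old):
        # Fires when the lport is claimed, i.e. chassis went from []
        # to a Chassis row -> "bound to our chassis" in the log.
        print('Port %s bound, provisioning metadata' % row.logical_port)
```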
Feb 19 20:18:36 compute-0 ovn_controller[98843]: 2026-02-19T20:18:36Z|00042|binding|INFO|Setting lport 9838caff-8a65-491d-8b0d-3fb3d10c299c ovn-installed in OVS
Feb 19 20:18:36 compute-0 ovn_controller[98843]: 2026-02-19T20:18:36Z|00043|binding|INFO|Setting lport 9838caff-8a65-491d-8b0d-3fb3d10c299c up in Southbound
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.539 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.551 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f6952f-92a9-4d6f-871a-d5b2257a2f63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:18:36 compute-0 systemd-udevd[244506]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:18:36 compute-0 systemd-machined[158158]: New machine qemu-3-instance-00000003.
Feb 19 20:18:36 compute-0 NetworkManager[57033]: <info>  [1771532316.5815] device (tap9838caff-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:18:36 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb 19 20:18:36 compute-0 NetworkManager[57033]: <info>  [1771532316.5862] device (tap9838caff-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.587 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[435de9c2-ec94-4446-86ef-a265056cfacf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.593 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[962a3089-9471-405d-ada0-01aa4282679f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.614 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[add61292-b378-43b4-b62a-23631bbf2b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.628 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[69567f3b-95d6-40eb-88ba-173a84030cf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 574, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 574, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 39058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244522, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.638 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[cba11db8-7b57-42e3-9fea-99e37f67b5d9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348361, 'tstamp': 348361}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244525, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348365, 'tstamp': 348365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244525, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.640 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.641 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:36 compute-0 nova_compute[188777]: 2026-02-19 20:18:36.643 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.643 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.644 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.644 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:18:36 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:18:36.644 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:18:36 compute-0 podman[244479]: 2026-02-19 20:18:36.652479363 +0000 UTC m=+0.155198480 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.030 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532317.0298948, 14ed9fe0-b150-4bd8-852e-7f2f62d4374b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.031 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] VM Started (Lifecycle Event)
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.059 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.067 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532317.030207, 14ed9fe0-b150-4bd8-852e-7f2f62d4374b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.067 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] VM Paused (Lifecycle Event)
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.096 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.102 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.125 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] During sync_power_state the instance has a pending task (spawning). Skip.
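The VM Started/Paused lines are libvirt lifecycle callbacks that nova re-emits as LifecycleEvents: the guest is launched paused while nova waits for the network-vif-plugged confirmation, which is why sync_power_state sees VM power_state 3 (paused) mid-spawn, followed by Resumed shortly after. A bare libvirt-python listener showing where that event stream originates; the connection URI is an assumption.

```python
# Minimal libvirt lifecycle listener; nova's driver consumes this
# same stream and re-emits it as the LifecycleEvents logged above.
import libvirt


def on_lifecycle(conn, dom, event, detail, opaque):
    # event is e.g. VIR_DOMAIN_EVENT_STARTED / _SUSPENDED / _RESUMED,
    # mapping to the "VM Started/Paused/Resumed" log lines.
    print(dom.UUIDString(), event, detail)


libvirt.virEventRegisterDefaultImpl()
conn = libvirt.open('qemu:///system')  # URI assumed
conn.domainEventRegisterAny(
    None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
while True:
    libvirt.virEventRunDefaultImpl()
```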
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.396 188781 DEBUG nova.compute.manager [req-3a451707-7f4b-4790-99d7-2ae82ace2307 req-e5309ec0-4c74-4ab5-9cd3-fc4df4ee7b39 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.399 188781 DEBUG oslo_concurrency.lockutils [req-3a451707-7f4b-4790-99d7-2ae82ace2307 req-e5309ec0-4c74-4ab5-9cd3-fc4df4ee7b39 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.400 188781 DEBUG oslo_concurrency.lockutils [req-3a451707-7f4b-4790-99d7-2ae82ace2307 req-e5309ec0-4c74-4ab5-9cd3-fc4df4ee7b39 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.401 188781 DEBUG oslo_concurrency.lockutils [req-3a451707-7f4b-4790-99d7-2ae82ace2307 req-e5309ec0-4c74-4ab5-9cd3-fc4df4ee7b39 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
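The acquire/release pair above is an oslo.concurrency named lock serializing access to the per-instance event queue; the same primitive guards "compute_resources" later in this log. Both forms, sketched with lock names copied from the log:

```python
# Named oslo.concurrency locks, as seen in the acquire/release
# DEBUG pairs; bodies are placeholders.
from oslo_concurrency import lockutils

# Context-manager form, like the "...-events" lock above.
with lockutils.lock('14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events'):
    print('critical section: pop_instance_event')

# Decorator form, like the "compute_resources" lock further below.
@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    print('cache cleaned')

clean_compute_node_cache()
```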
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.404 188781 DEBUG nova.compute.manager [req-3a451707-7f4b-4790-99d7-2ae82ace2307 req-e5309ec0-4c74-4ab5-9cd3-fc4df4ee7b39 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Processing event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.406 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.411 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532317.4109066, 14ed9fe0-b150-4bd8-852e-7f2f62d4374b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.411 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] VM Resumed (Lifecycle Event)
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.415 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.421 188781 INFO nova.virt.libvirt.driver [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Instance spawned successfully.
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.422 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.440 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.453 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.460 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.460 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.461 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.462 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.463 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.463 188781 DEBUG nova.virt.libvirt.driver [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.499 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.537 188781 INFO nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Took 9.16 seconds to spawn the instance on the hypervisor.
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.538 188781 DEBUG nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.619 188781 INFO nova.compute.manager [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Took 9.78 seconds to build instance.
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.635 188781 DEBUG oslo_concurrency.lockutils [None req-b0f74b3f-6cae-4297-95c9-793a79079e25 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.682 188781 DEBUG nova.network.neutron [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updated VIF entry in instance network info cache for port 9838caff-8a65-491d-8b0d-3fb3d10c299c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.683 188781 DEBUG nova.network.neutron [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updating instance_info_cache with network_info: [{"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
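The instance_info_cache payload above is plain JSON, which makes it convenient to mine when debugging from logs (fixed IP, floating IP, port ID). A small extraction sketch; the `payload` literal below is a pared-down but valid stand-in for the logged list:

```python
import json

# Pared-down stand-in for the network_info JSON logged above.
payload = json.dumps([{
    "id": "9838caff-8a65-491d-8b0d-3fb3d10c299c",
    "network": {"subnets": [{"ips": [{
        "address": "192.168.0.86",
        "floating_ips": [{"address": "192.168.122.207"}]}]}]},
}])

for vif in json.loads(payload):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fips = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", fips)
# 9838caff-8a65-491d-8b0d-3fb3d10c299c 192.168.0.86 -> ['192.168.122.207']
```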
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.699 188781 DEBUG oslo_concurrency.lockutils [req-6e976bc2-613e-467f-8da5-97c2af0ace9f req-b9565e16-6027-4312-a554-002584468d40 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:18:37 compute-0 nova_compute[188777]: 2026-02-19 20:18:37.763 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:38 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:18:38 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:18:38 compute-0 nova_compute[188777]: 2026-02-19 20:18:38.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:38 compute-0 nova_compute[188777]: 2026-02-19 20:18:38.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.282 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.283 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.283 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.436 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.436 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.437 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.437 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.519 188781 DEBUG nova.compute.manager [req-41da037c-37d7-4010-8bf3-8571ae28f634 req-c7abfac7-52a5-42a4-8545-9f5d144e1b76 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.520 188781 DEBUG oslo_concurrency.lockutils [req-41da037c-37d7-4010-8bf3-8571ae28f634 req-c7abfac7-52a5-42a4-8545-9f5d144e1b76 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.520 188781 DEBUG oslo_concurrency.lockutils [req-41da037c-37d7-4010-8bf3-8571ae28f634 req-c7abfac7-52a5-42a4-8545-9f5d144e1b76 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.520 188781 DEBUG oslo_concurrency.lockutils [req-41da037c-37d7-4010-8bf3-8571ae28f634 req-c7abfac7-52a5-42a4-8545-9f5d144e1b76 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.520 188781 DEBUG nova.compute.manager [req-41da037c-37d7-4010-8bf3-8571ae28f634 req-c7abfac7-52a5-42a4-8545-9f5d144e1b76 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] No waiting events found dispatching network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:18:39 compute-0 nova_compute[188777]: 2026-02-19 20:18:39.520 188781 WARNING nova.compute.manager [req-41da037c-37d7-4010-8bf3-8571ae28f634 req-c7abfac7-52a5-42a4-8545-9f5d144e1b76 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received unexpected event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c for instance with vm_state active and task_state None.
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.543 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.728 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.748 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.748 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.749 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.750 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.777 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.778 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.778 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.779 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.867 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.958 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:40 compute-0 nova_compute[188777]: 2026-02-19 20:18:40.959 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.039 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.040 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.101 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.101 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.150 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.162 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.254 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.256 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.335 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.336 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.386 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.387 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.475 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.495 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.550 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.551 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.601 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.602 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.649 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.650 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:18:41 compute-0 nova_compute[188777]: 2026-02-19 20:18:41.695 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.047 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.049 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4966MB free_disk=72.22541809082031GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.049 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.049 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.130 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.130 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.130 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.130 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.131 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.156 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.181 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.181 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.196 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.239 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.309 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.325 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.348 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.349 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:18:42 compute-0 sshd-session[244557]: Invalid user redhat from 103.119.94.10 port 36138
Feb 19 20:18:42 compute-0 sshd-session[244557]: Received disconnect from 103.119.94.10 port 36138:11: Bye Bye [preauth]
Feb 19 20:18:42 compute-0 sshd-session[244557]: Disconnected from invalid user redhat 103.119.94.10 port 36138 [preauth]
Feb 19 20:18:42 compute-0 nova_compute[188777]: 2026-02-19 20:18:42.767 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:43 compute-0 nova_compute[188777]: 2026-02-19 20:18:43.863 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:18:45 compute-0 nova_compute[188777]: 2026-02-19 20:18:45.546 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:47 compute-0 nova_compute[188777]: 2026-02-19 20:18:47.769 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:48 compute-0 podman[244598]: 2026-02-19 20:18:48.416423108 +0000 UTC m=+0.086863948 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:18:48 compute-0 podman[244597]: 2026-02-19 20:18:48.427004065 +0000 UTC m=+0.099586781 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., architecture=x86_64)
Feb 19 20:18:50 compute-0 nova_compute[188777]: 2026-02-19 20:18:50.552 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:52 compute-0 nova_compute[188777]: 2026-02-19 20:18:52.771 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:53 compute-0 podman[244639]: 2026-02-19 20:18:53.392333087 +0000 UTC m=+0.082345809 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:18:55 compute-0 nova_compute[188777]: 2026-02-19 20:18:55.554 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:56 compute-0 podman[244656]: 2026-02-19 20:18:56.379563858 +0000 UTC m=+0.066770686 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Feb 19 20:18:56 compute-0 podman[244657]: 2026-02-19 20:18:56.445777986 +0000 UTC m=+0.125891085 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:18:57 compute-0 nova_compute[188777]: 2026-02-19 20:18:57.774 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:18:59 compute-0 podman[204724]: time="2026-02-19T20:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:18:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:18:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4362 "" "Go-http-client/1.1"
Feb 19 20:19:00 compute-0 podman[244691]: 2026-02-19 20:19:00.434643006 +0000 UTC m=+0.113635395 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:19:00 compute-0 nova_compute[188777]: 2026-02-19 20:19:00.557 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:01 compute-0 openstack_network_exporter[207898]: ERROR   20:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:19:01 compute-0 openstack_network_exporter[207898]: ERROR   20:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:19:02 compute-0 nova_compute[188777]: 2026-02-19 20:19:02.776 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:03 compute-0 podman[244714]: 2026-02-19 20:19:03.387227016 +0000 UTC m=+0.073460043 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 19 20:19:05 compute-0 nova_compute[188777]: 2026-02-19 20:19:05.559 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:06 compute-0 ovn_controller[98843]: 2026-02-19T20:19:06Z|00044|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Feb 19 20:19:07 compute-0 podman[244731]: 2026-02-19 20:19:07.436464393 +0000 UTC m=+0.121538359 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:19:07 compute-0 nova_compute[188777]: 2026-02-19 20:19:07.778 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:09 compute-0 ovn_controller[98843]: 2026-02-19T20:19:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:8b:13 192.168.0.86
Feb 19 20:19:09 compute-0 ovn_controller[98843]: 2026-02-19T20:19:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:8b:13 192.168.0.86
Feb 19 20:19:10 compute-0 nova_compute[188777]: 2026-02-19 20:19:10.570 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:11 compute-0 sshd-session[244761]: Received disconnect from 125.31.2.160 port 39516:11: Bye Bye [preauth]
Feb 19 20:19:11 compute-0 sshd-session[244761]: Disconnected from authenticating user root 125.31.2.160 port 39516 [preauth]
Feb 19 20:19:12 compute-0 nova_compute[188777]: 2026-02-19 20:19:12.781 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.138 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.139 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.139 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.144 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.146 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:19:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:15.149 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/14ed9fe0-b150-4bd8-852e-7f2f62d4374b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:19:15 compute-0 nova_compute[188777]: 2026-02-19 20:19:15.574 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.323 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Thu, 19 Feb 2026 20:19:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e680bcf1-9dae-4534-8b29-840a1d5dc66f x-openstack-request-id: req-e680bcf1-9dae-4534-8b29-840a1d5dc66f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.323 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "14ed9fe0-b150-4bd8-852e-7f2f62d4374b", "name": "vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp", "status": "ACTIVE", "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "user_id": "9f5597a45dc34ee19bcfe938afde768f", "metadata": {"metering.server_group": "78adc0ea-8772-4283-8bd6-6dbdcecee09e"}, "hostId": "fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7", "image": {"id": "e1a79c75-2fa3-410d-9c4c-91db3eeca51d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e1a79c75-2fa3-410d-9c4c-91db3eeca51d"}]}, "flavor": {"id": "8030bc1a-9afb-4678-ac07-8b59a1275925", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8030bc1a-9afb-4678-ac07-8b59a1275925"}]}, "created": "2026-02-19T20:18:26Z", "updated": "2026-02-19T20:18:37Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.86", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:9c:8b:13"}, {"version": 4, "addr": "192.168.122.207", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:9c:8b:13"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/14ed9fe0-b150-4bd8-852e-7f2f62d4374b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/14ed9fe0-b150-4bd8-852e-7f2f62d4374b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:18:37.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.323 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/14ed9fe0-b150-4bd8-852e-7f2f62d4374b used request id req-e680bcf1-9dae-4534-8b29-840a1d5dc66f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.324 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b', 'name': 'vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.328 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'name': 'vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
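
discover_libvirt_polling builds these instance-data records by pairing the domains running locally under libvirt with server metadata obtained from Nova. A hypothetical sketch of that shape (illustrative names, not ceilometer's actual implementation):

    import libvirt

    def discover_local_instances(nova_metadata_by_uuid):
        # List active domains on this hypervisor and attach the cached
        # Nova-side metadata (flavor, tenant, metering keys, ...) by UUID.
        conn = libvirt.open('qemu:///system')
        try:
            instances = []
            for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
                uuid = dom.UUIDString()
                meta = nova_metadata_by_uuid.get(uuid, {})
                instances.append({'id': uuid, 'name': dom.name(), **meta})
            return instances
        finally:
            conn.close()
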
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.329 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.329 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.329 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.329 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:19:17.329851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.335 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.340 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 14ed9fe0-b150-4bd8-852e-7f2f62d4374b / tap9838caff-8a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.340 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.345 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.346 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
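
Every meter in this log follows the same cycle: run discovery, check whether the pollster belongs to a source that needs coordination (here the group name and hashrings are both None, so the agent polls everything itself), emit one sample per resource, and record a heartbeat. A hypothetical outline of that loop (illustrative names, not ceilometer's actual code):

    import datetime

    def partition_with_hashring(resources, group):
        raise NotImplementedError  # placeholder; never reached when group is None

    def run_pollster(pollster, discover, heartbeats, coordination_group=None):
        resources = discover()                      # "Executing discovery process ..."
        if coordination_group is not None:          # "Checking if we need coordination ..."
            resources = partition_with_hashring(resources, coordination_group)
        samples = list(pollster.get_samples(resources))   # one "volume: ..." line each
        heartbeats[pollster.name] = datetime.datetime.utcnow()  # "Updated heartbeat ..."
        return samples
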
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.347 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.347 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.347 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.347 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.348 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.348 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:19:17.348056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.348 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp>]
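
The *.rate meters fail permanently because the libvirt inspector only exposes cumulative counters, so rate pollsters raise PollsterPermanentError and the manager stops offering those resources to them on this source. A hypothetical sketch of that blacklisting behaviour (illustrative classes and names):

    class PollsterPermanentError(Exception):
        # Raised with the resources a pollster can never serve.
        def __init__(self, resources):
            self.resources = resources
            super().__init__(resources)

    blacklist = set()  # (pollster_name, resource_id) pairs excluded from future cycles

    def poll_once(pollster, resources):
        active = [r for r in resources
                  if (pollster.name, r['id']) not in blacklist]
        try:
            return list(pollster.get_samples(active))
        except PollsterPermanentError as exc:
            for r in exc.resources:                 # "Prevent pollster ... anymore!"
                blacklist.add((pollster.name, r['id']))
            return []
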
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.349 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.350 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.350 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.350 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.350 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:19:17.350510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.351 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.351 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.352 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.352 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.352 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.353 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.353 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.353 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.353 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:19:17.353554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.354 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.354 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.355 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
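
The *.delta meters report the change since the previous reading for the same instance and vNIC; the first observation has no predecessor (the "No delta meter predecessor" line for tap9838caff-8a above), which is why that instance's delta is 0 this cycle. A minimal sketch of the bookkeeping (illustrative code):

    _previous = {}  # (instance_id, device) -> last cumulative counter value

    def delta(instance_id, device, current):
        key = (instance_id, device)
        prev = _previous.get(key)
        _previous[key] = current
        # First sighting yields 0, as in the log; afterwards, current - previous.
        return 0 if prev is None else current - prev
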
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.355 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.355 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.356 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.356 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.357 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:19:17.356693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.357 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes volume: 1666 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.357 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes volume: 4764 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.358 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.359 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.359 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.359 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:19:17.359711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.385 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.408 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.432 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.432 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
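
A power.state volume of 1 corresponds to a running domain (Nova's power_state 1 is RUNNING, matching the "OS-EXT-STS:power_state": 1 in the server body above). A minimal sketch, assuming a local qemu:///system connection, of reading that state with the libvirt bindings:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('14ed9fe0-b150-4bd8-852e-7f2f62d4374b')
    state, reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # True for the instances above
    conn.close()
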
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.432 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.433 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.434 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.434 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.434 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.434 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.434 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.434 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:19:17.433301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:19:17.434575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.464 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.464 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.464 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.489 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.489 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.490 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.514 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.514 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.514 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.515 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
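
Each instance reports three block devices: two 1073741824-byte (1 GiB) volumes, consistent with the flavor's disk=1 and ephemeral=1 in the discovery records above, plus a small third device consistent with the config drive ("config_drive": "True" in the server body). A minimal sketch, assuming a local libvirt connection and illustrative device names, of reading per-device capacity:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')     # name taken from the log above
    for dev in ('vda', 'vdb', 'vdc'):                # device names are an assumption
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)   # capacity in bytes, as logged
    conn.close()
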
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.516 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.516 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.516 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.516 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.516 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:19:17.516853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.595 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.595 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.596 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.678 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.678 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.678 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.762 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.763 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.763 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.763 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
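
disk.device.read.bytes is likewise cumulative per device. libvirt's basic block statistics return the read/write request and byte counters in one call; a minimal sketch:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')  # device name illustrative
    print(rd_bytes)   # cumulative bytes read, cf. the volumes above
    conn.close()
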
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.764 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.764 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.764 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.764 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.765 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.765 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 36000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.765 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/cpu volume: 31190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.766 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/cpu volume: 265640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.767 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
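
The cpu volumes are cumulative guest CPU time in nanoseconds (36000000000 ns is 36 s). A minimal sketch of reading the counter from libvirt and deriving a utilisation figure from two successive samples:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000002')     # name taken from the log above
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(cpu_time_ns)
    conn.close()

    # Utilisation between two samples taken wall_ns apart:
    #   cpu_util = 100.0 * (cpu_time_2 - cpu_time_1) / (wall_ns * vcpus)
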
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.767 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.767 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.767 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.768 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.768 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.768 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.768 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:19:17.764971) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp>]
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:19:17.768470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.769 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.770 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.770 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.770 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 786473372 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.771 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 127444335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.771 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 200419857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.771 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 699163782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.771 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 126021412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.772 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 99179876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:19:17.769705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.773 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
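
disk.device.read.latency is also cumulative, in nanoseconds spent servicing reads. libvirt exposes timing through its extended block statistics; the dictionary key below follows libvirt's typed-parameter field name and should be treated as an assumption:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')
    stats = dom.blockStatsFlags('vda')               # device name illustrative
    print(stats.get('rd_total_times'))               # assumed key: cumulative read time in ns
    conn.close()
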
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.773 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.773 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.774 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.774 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.774 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.775 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:19:17.774724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.775 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.776 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.776 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.777 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.777 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.777 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.777 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.778 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.778 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:19:17.778135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.779 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.779 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.780 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.780 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.780 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.781 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.781 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.781 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.781 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.782 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.782 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.783 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.783 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.784 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.784 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:19:17.781644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.785 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.785 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.786 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.786 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.786 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:19:17 compute-0 nova_compute[188777]: 2026-02-19 20:19:17.784 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.786 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.787 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.787 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.788 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:19:17.787598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.788 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.789 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.789 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
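
All of the network.* counters in this log come from per-vNIC interface statistics; the tap device name for one instance appears earlier (tap9838caff-8a). A minimal sketch of reading the eight counters behind the packets, bytes, drop, and error meters:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('14ed9fe0-b150-4bd8-852e-7f2f62d4374b')
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats('tap9838caff-8a')
    print(rx_packets, tx_packets, rx_drop, tx_errs)
    conn.close()
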
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.790 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.790 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.790 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.791 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.791 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.792 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:19:17.791572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.792 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.793 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.793 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.793 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.794 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.794 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.795 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.795 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.796 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.796 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.797 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.797 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.797 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.797 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.798 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:19:17.797845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.798 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.799 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.799 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.800 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.800 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.800 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.801 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.801 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.802 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.802 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.803 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.803 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.803 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.803 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:19:17.803853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.804 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.804 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.805 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.805 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.806 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.806 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.807 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.807 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.808 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.808 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.808 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.809 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.809 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.809 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.809 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.809 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.810 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:19:17.809649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.810 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.810 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 1242632764 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.811 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 16674926 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.811 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.811 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 1796614628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.812 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 9187833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.812 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.812 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
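[annotation] The disk.device.write.latency volumes above (e.g. 2413036213) are cumulative nanosecond totals reported by the hypervisor, not instantaneous latencies; to chart latency you normally difference two consecutive polls. A small worked sketch with hypothetical readings taken one polling interval apart:

    # Converting a cumulative counter to a per-interval rate (values hypothetical).
    t0, v0 = 0.0, 2_413_036_213      # previous poll: seconds, total ns
    t1, v1 = 60.0, 2_413_900_000     # next poll, 60 s later
    rate_ns_per_s = (v1 - v0) / (t1 - t0)
    print(f"{rate_ns_per_s:.0f} ns of write latency accumulated per second")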
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.813 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.813 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.813 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.813 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.814 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.814 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.814 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.815 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:19:17.813830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.815 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.815 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.815 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.816 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.816 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.816 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:19:17.816122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.817 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.817 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.817 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.818 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.818 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.818 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.819 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.819 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.819 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.820 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.820 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.820 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.821 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.821 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.821 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.822 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.822 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:19:17.820465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.822 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.823 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.823 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.823 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.823 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.824 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.824 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.824 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/memory.usage volume: 49.58203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.824 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:19:17.822306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.824 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:19:17.824105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.825 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/memory.usage volume: 49.08984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.825 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.825 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.826 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.826 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.826 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.826 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.827 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.827 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.827 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:19:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:19:17.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:19:17.826406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
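[annotation] Every pollster in this cycle logged "The current hashrings are the following [None]", i.e. this agent runs without a coordination backend and polls every local instance itself. When several agents share a polling namespace, ceilometer partitions resources with tooz hash rings; a minimal sketch of the underlying idea (node names and backend wiring are made up, and ceilometer drives this through tooz group membership rather than a bare ring):

    # Sketch of hash-ring partitioning as used for polling coordination.
    from tooz import hashring

    ring = hashring.HashRing(['agent-0', 'agent-1', 'agent-2'])
    resource_id = b'5aaac42d-946d-4c6f-9bde-23b8b6613b59'  # instance from the log
    owners = ring.get_nodes(resource_id, replicas=1)
    print(owners)  # the one agent responsible for polling this instance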
Feb 19 20:19:19 compute-0 podman[244765]: 2026-02-19 20:19:19.371391015 +0000 UTC m=+0.052269128 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:19:19 compute-0 podman[244764]: 2026-02-19 20:19:19.376809222 +0000 UTC m=+0.062939217 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1770267347, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Feb 19 20:19:20 compute-0 nova_compute[188777]: 2026-02-19 20:19:20.577 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:22 compute-0 nova_compute[188777]: 2026-02-19 20:19:22.786 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
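[annotation] The recurring nova_compute "[POLLIN] on fd 26" lines are ovsdbapp's OVSDB connection waking up: ovs/poller.py logs every file descriptor it unblocks on at DEBUG. A minimal sketch of the ovs.poller API that produces these messages (the socket pair is just a stand-in for the OVSDB connection):

    # Sketch of the ovs poller loop behind the "[POLLIN] on fd N" debug lines.
    import socket
    from ovs import poller

    rd, wr = socket.socketpair()           # stand-in for the OVSDB socket
    wr.send(b'x')                          # make rd readable
    p = poller.Poller()
    p.fd_wait(rd.fileno(), poller.POLLIN)  # wake when rd is readable
    p.timer_wait(5000)                     # plus a timeout, as the IDL loop does
    p.block()                              # logs "[POLLIN] on fd ..." at DEBUG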
Feb 19 20:19:24 compute-0 podman[244808]: 2026-02-19 20:19:24.395773312 +0000 UTC m=+0.074115643 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:19:25 compute-0 nova_compute[188777]: 2026-02-19 20:19:25.580 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:27 compute-0 podman[244828]: 2026-02-19 20:19:27.414427435 +0000 UTC m=+0.101081237 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 19 20:19:27 compute-0 podman[244827]: 2026-02-19 20:19:27.44755642 +0000 UTC m=+0.128419223 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler)
Feb 19 20:19:27 compute-0 nova_compute[188777]: 2026-02-19 20:19:27.789 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:29 compute-0 podman[204724]: time="2026-02-19T20:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:19:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:19:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Feb 19 20:19:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:19:30.430 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:19:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:19:30.431 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:19:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:19:30.431 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:19:30 compute-0 nova_compute[188777]: 2026-02-19 20:19:30.584 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:31 compute-0 openstack_network_exporter[207898]: ERROR   20:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:19:31 compute-0 podman[244867]: 2026-02-19 20:19:31.428355841 +0000 UTC m=+0.113617945 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:19:31 compute-0 openstack_network_exporter[207898]: ERROR   20:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:19:32 compute-0 nova_compute[188777]: 2026-02-19 20:19:32.793 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:33 compute-0 sshd-session[244891]: Invalid user iksi from 103.250.11.249 port 51842
Feb 19 20:19:33 compute-0 sshd-session[244891]: Received disconnect from 103.250.11.249 port 51842:11: Bye Bye [preauth]
Feb 19 20:19:33 compute-0 sshd-session[244891]: Disconnected from invalid user iksi 103.250.11.249 port 51842 [preauth]
Feb 19 20:19:34 compute-0 podman[244895]: 2026-02-19 20:19:34.420619317 +0000 UTC m=+0.095294917 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:19:35 compute-0 nova_compute[188777]: 2026-02-19 20:19:35.587 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:36 compute-0 nova_compute[188777]: 2026-02-19 20:19:36.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:36 compute-0 nova_compute[188777]: 2026-02-19 20:19:36.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:19:36 compute-0 nova_compute[188777]: 2026-02-19 20:19:36.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:36 compute-0 nova_compute[188777]: 2026-02-19 20:19:36.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:19:36 compute-0 sshd-session[244913]: Received disconnect from 103.179.56.24 port 43992:11: Bye Bye [preauth]
Feb 19 20:19:36 compute-0 sshd-session[244913]: Disconnected from authenticating user root 103.179.56.24 port 43992 [preauth]
Feb 19 20:19:37 compute-0 nova_compute[188777]: 2026-02-19 20:19:37.282 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:37 compute-0 nova_compute[188777]: 2026-02-19 20:19:37.796 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:38 compute-0 nova_compute[188777]: 2026-02-19 20:19:38.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:38 compute-0 nova_compute[188777]: 2026-02-19 20:19:38.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:38 compute-0 podman[244915]: 2026-02-19 20:19:38.444446819 +0000 UTC m=+0.119347371 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 19 20:19:39 compute-0 nova_compute[188777]: 2026-02-19 20:19:39.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:40 compute-0 nova_compute[188777]: 2026-02-19 20:19:40.589 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:41 compute-0 nova_compute[188777]: 2026-02-19 20:19:41.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:41 compute-0 nova_compute[188777]: 2026-02-19 20:19:41.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:19:41 compute-0 nova_compute[188777]: 2026-02-19 20:19:41.472 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:19:41 compute-0 nova_compute[188777]: 2026-02-19 20:19:41.473 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:19:41 compute-0 nova_compute[188777]: 2026-02-19 20:19:41.473 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.720 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.742 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.743 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.744 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.744 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.772 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.773 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.773 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.774 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.797 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.901 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.964 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:42 compute-0 nova_compute[188777]: 2026-02-19 20:19:42.965 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.018 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.023 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 sshd-session[244893]: error: kex_exchange_identification: read: Connection timed out
Feb 19 20:19:43 compute-0 sshd-session[244893]: banner exchange: Connection from 125.94.106.195 port 47390: Connection timed out
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.093 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.095 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.149 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.156 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.200 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.202 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.254 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.255 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.337 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.338 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.417 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.424 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.494 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.495 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.578 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.580 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.660 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.663 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:19:43 compute-0 nova_compute[188777]: 2026-02-19 20:19:43.732 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.127 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.130 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4895MB free_disk=72.20392608642578GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.130 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.131 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.246 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.248 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.248 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.249 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.250 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.400 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.417 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.420 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.420 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:19:44 compute-0 nova_compute[188777]: 2026-02-19 20:19:44.421 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:45 compute-0 nova_compute[188777]: 2026-02-19 20:19:45.592 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:45 compute-0 nova_compute[188777]: 2026-02-19 20:19:45.950 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:47 compute-0 nova_compute[188777]: 2026-02-19 20:19:47.800 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:49 compute-0 nova_compute[188777]: 2026-02-19 20:19:49.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:19:49 compute-0 nova_compute[188777]: 2026-02-19 20:19:49.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:19:49 compute-0 nova_compute[188777]: 2026-02-19 20:19:49.283 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:19:50 compute-0 podman[244980]: 2026-02-19 20:19:50.392550909 +0000 UTC m=+0.076811076 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:19:50 compute-0 podman[244979]: 2026-02-19 20:19:50.394316014 +0000 UTC m=+0.077280312 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 19 20:19:50 compute-0 nova_compute[188777]: 2026-02-19 20:19:50.594 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:52 compute-0 nova_compute[188777]: 2026-02-19 20:19:52.802 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:55 compute-0 podman[245024]: 2026-02-19 20:19:55.410265891 +0000 UTC m=+0.086170816 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 19 20:19:55 compute-0 nova_compute[188777]: 2026-02-19 20:19:55.596 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:57 compute-0 nova_compute[188777]: 2026-02-19 20:19:57.805 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:19:58 compute-0 podman[245045]: 2026-02-19 20:19:58.410851985 +0000 UTC m=+0.095391702 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 19 20:19:58 compute-0 podman[245044]: 2026-02-19 20:19:58.452868044 +0000 UTC m=+0.133530770 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=kepler, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, version=9.4, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=)
Feb 19 20:19:59 compute-0 podman[204724]: time="2026-02-19T20:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:19:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:19:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Feb 19 20:20:00 compute-0 nova_compute[188777]: 2026-02-19 20:20:00.598 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:01 compute-0 openstack_network_exporter[207898]: ERROR   20:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:20:01 compute-0 openstack_network_exporter[207898]: ERROR   20:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:20:02 compute-0 podman[245083]: 2026-02-19 20:20:02.447706289 +0000 UTC m=+0.121392065 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:20:02 compute-0 nova_compute[188777]: 2026-02-19 20:20:02.808 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:05 compute-0 podman[245107]: 2026-02-19 20:20:05.453613779 +0000 UTC m=+0.126215505 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Feb 19 20:20:05 compute-0 nova_compute[188777]: 2026-02-19 20:20:05.602 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:07 compute-0 nova_compute[188777]: 2026-02-19 20:20:07.814 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:09 compute-0 podman[245127]: 2026-02-19 20:20:09.488560694 +0000 UTC m=+0.162926640 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 19 20:20:10 compute-0 nova_compute[188777]: 2026-02-19 20:20:10.605 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:12 compute-0 nova_compute[188777]: 2026-02-19 20:20:12.816 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:15 compute-0 nova_compute[188777]: 2026-02-19 20:20:15.608 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:17 compute-0 nova_compute[188777]: 2026-02-19 20:20:17.819 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:20 compute-0 nova_compute[188777]: 2026-02-19 20:20:20.611 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:21 compute-0 podman[245155]: 2026-02-19 20:20:21.41647017 +0000 UTC m=+0.084706501 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:20:21 compute-0 podman[245154]: 2026-02-19 20:20:21.44199126 +0000 UTC m=+0.118213688 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.7, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter)
Feb 19 20:20:22 compute-0 nova_compute[188777]: 2026-02-19 20:20:22.821 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:24 compute-0 nova_compute[188777]: 2026-02-19 20:20:24.418 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:24 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:24.421 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:20:24 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:24.422 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:20:25 compute-0 nova_compute[188777]: 2026-02-19 20:20:25.613 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:26 compute-0 podman[245200]: 2026-02-19 20:20:26.439375462 +0000 UTC m=+0.110010864 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 19 20:20:27 compute-0 nova_compute[188777]: 2026-02-19 20:20:27.824 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:29 compute-0 podman[245218]: 2026-02-19 20:20:29.418656764 +0000 UTC m=+0.093537769 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, name=ubi9, vcs-type=git)
Feb 19 20:20:29 compute-0 podman[245219]: 2026-02-19 20:20:29.485904332 +0000 UTC m=+0.147751080 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:20:29 compute-0 podman[204724]: time="2026-02-19T20:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:20:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:20:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Feb 19 20:20:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:30.424 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:30.431 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:30.431 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:30.432 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:30 compute-0 nova_compute[188777]: 2026-02-19 20:20:30.616 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:30 compute-0 nova_compute[188777]: 2026-02-19 20:20:30.973 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:30 compute-0 nova_compute[188777]: 2026-02-19 20:20:30.974 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:30 compute-0 nova_compute[188777]: 2026-02-19 20:20:30.997 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.154 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.155 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.174 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.175 188781 INFO nova.compute.claims [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:20:31 compute-0 openstack_network_exporter[207898]: ERROR   20:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:20:31 compute-0 openstack_network_exporter[207898]: ERROR   20:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.660 188781 DEBUG nova.compute.provider_tree [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.677 188781 DEBUG nova.scheduler.client.report [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.704 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.706 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.765 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.766 188781 DEBUG nova.network.neutron [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.797 188781 INFO nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.841 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.986 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.988 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.989 188781 INFO nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Creating image(s)
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.989 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.990 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:31 compute-0 nova_compute[188777]: 2026-02-19 20:20:31.991 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.010 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.063 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.064 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.065 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.083 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.142 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.144 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.190 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a,backing_fmt=raw /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.192 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ab3f72be2a6a58a25574f1d71543e651d74a575a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.192 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.238 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.240 188781 DEBUG nova.virt.disk.api [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Checking if we can resize image /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.240 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.287 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.288 188781 DEBUG nova.virt.disk.api [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Cannot resize image /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.289 188781 DEBUG nova.objects.instance [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'migration_context' on Instance uuid 1cda3ab8-0805-4bcd-955c-996994fd3cb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.305 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.306 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.307 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.323 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.379 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.380 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.381 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.396 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.439 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.440 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.490 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.491 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.492 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.543 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.544 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.545 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Ensure instance console log exists: /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.545 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.546 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.546 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:32 compute-0 nova_compute[188777]: 2026-02-19 20:20:32.826 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:33 compute-0 podman[245282]: 2026-02-19 20:20:33.386229463 +0000 UTC m=+0.068286832 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.598 188781 DEBUG nova.network.neutron [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Successfully updated port: bbe0af68-c9d2-4b14-854b-b5355d9ef899 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.619 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.622 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.622 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquired lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.622 188781 DEBUG nova.network.neutron [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.750 188781 DEBUG nova.compute.manager [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-changed-bbe0af68-c9d2-4b14-854b-b5355d9ef899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.750 188781 DEBUG nova.compute.manager [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Refreshing instance network info cache due to event network-changed-bbe0af68-c9d2-4b14-854b-b5355d9ef899. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:20:35 compute-0 nova_compute[188777]: 2026-02-19 20:20:35.751 188781 DEBUG oslo_concurrency.lockutils [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:20:36 compute-0 nova_compute[188777]: 2026-02-19 20:20:36.283 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:36 compute-0 nova_compute[188777]: 2026-02-19 20:20:36.283 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:20:36 compute-0 podman[245308]: 2026-02-19 20:20:36.414329412 +0000 UTC m=+0.093839469 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:20:36 compute-0 nova_compute[188777]: 2026-02-19 20:20:36.416 188781 DEBUG nova.network.neutron [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:20:37 compute-0 nova_compute[188777]: 2026-02-19 20:20:37.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:37 compute-0 nova_compute[188777]: 2026-02-19 20:20:37.829 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:38 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.447 188781 DEBUG nova.network.neutron [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.469 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Releasing lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.471 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Instance network_info: |[{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.473 188781 DEBUG oslo_concurrency.lockutils [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.474 188781 DEBUG nova.network.neutron [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Refreshing network info cache for port bbe0af68-c9d2-4b14-854b-b5355d9ef899 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.480 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Start _get_guest_xml network_info=[{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}], 'ephemerals': [{'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.493 188781 WARNING nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.509 188781 DEBUG nova.virt.libvirt.host [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.510 188781 DEBUG nova.virt.libvirt.host [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.518 188781 DEBUG nova.virt.libvirt.host [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.519 188781 DEBUG nova.virt.libvirt.host [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.520 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.520 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:11:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8030bc1a-9afb-4678-ac07-8b59a1275925',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:11:25Z,direct_url=<?>,disk_format='qcow2',id=e1a79c75-2fa3-410d-9c4c-91db3eeca51d,min_disk=0,min_ram=0,name='cirros',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:11:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.522 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.522 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.523 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.524 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.524 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.525 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.526 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.526 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.527 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.528 188781 DEBUG nova.virt.hardware [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.534 188781 DEBUG nova.virt.libvirt.vif [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp',id=4,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-1ppm1a12',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:20:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTgyNTE5NzIxNTgxMjk5ODczNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncw==
Feb 19 20:20:38 compute-0 nova_compute[188777]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTgyNTE5NzIxNTgxMjk5ODczNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=1cda3ab8-0805-4bcd-955c-996994fd3cb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.535 188781 DEBUG nova.network.os_vif_util [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.537 188781 DEBUG nova.network.os_vif_util [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.539 188781 DEBUG nova.objects.instance [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cda3ab8-0805-4bcd-955c-996994fd3cb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.555 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <uuid>1cda3ab8-0805-4bcd-955c-996994fd3cb4</uuid>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <name>instance-00000004</name>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <memory>524288</memory>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:name>vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp</nova:name>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:20:38</nova:creationTime>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:flavor name="m1.small">
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:memory>512</nova:memory>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:ephemeral>1</nova:ephemeral>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:user uuid="9f5597a45dc34ee19bcfe938afde768f">admin</nova:user>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:project uuid="59f01dee51a74ac1a9f82733f591827d">admin</nova:project>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="e1a79c75-2fa3-410d-9c4c-91db3eeca51d"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         <nova:port uuid="bbe0af68-c9d2-4b14-854b-b5355d9ef899">
Feb 19 20:20:38 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="192.168.0.76" ipVersion="4"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <system>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <entry name="serial">1cda3ab8-0805-4bcd-955c-996994fd3cb4</entry>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <entry name="uuid">1cda3ab8-0805-4bcd-955c-996994fd3cb4</entry>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </system>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <os>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </os>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <features>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </features>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <target dev="vdb" bus="virtio"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.config"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:2c:50:54"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <target dev="tapbbe0af68-c9"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/console.log" append="off"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <video>
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </video>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:20:38 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:20:38 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:20:38 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:20:38 compute-0 nova_compute[188777]: </domain>
Feb 19 20:20:38 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.557 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Preparing to wait for external event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.558 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.558 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.559 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.560 188781 DEBUG nova.virt.libvirt.vif [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp',id=4,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-1ppm1a12',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:20:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTgyNTE5NzIxNTgxMjk5ODczNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Feb 19 20:20:38 compute-0 nova_compute[188777]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTgyNTE5NzIxNTgxMjk5ODczNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=1cda3ab8-0805-4bcd-955c-996994fd3cb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.561 188781 DEBUG nova.network.os_vif_util [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.562 188781 DEBUG nova.network.os_vif_util [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.563 188781 DEBUG os_vif [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.564 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.564 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.565 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.570 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.570 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbe0af68-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.571 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbbe0af68-c9, col_values=(('external_ids', {'iface-id': 'bbe0af68-c9d2-4b14-854b-b5355d9ef899', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:50:54', 'vm-uuid': '1cda3ab8-0805-4bcd-955c-996994fd3cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.574 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:38 compute-0 NetworkManager[57033]: <info>  [1771532438.5754] manager: (tapbbe0af68-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.576 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.583 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.585 188781 INFO os_vif [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9')
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.645 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.646 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.646 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.646 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No VIF found with MAC fa:16:3e:2c:50:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:20:38 compute-0 nova_compute[188777]: 2026-02-19 20:20:38.647 188781 INFO nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Using config drive
Feb 19 20:20:38 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:20:38.534 188781 DEBUG nova.virt.libvirt.vif [None req-a0581f31-13 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:20:38 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:20:38.560 188781 DEBUG nova.virt.libvirt.vif [None req-a0581f31-13 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.402 188781 INFO nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Creating config drive at /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.config
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.409 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps4g6air8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.527 188781 DEBUG oslo_concurrency.processutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps4g6air8" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:39 compute-0 NetworkManager[57033]: <info>  [1771532439.5907] manager: (tapbbe0af68-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Feb 19 20:20:39 compute-0 kernel: tapbbe0af68-c9: entered promiscuous mode
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.593 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:39 compute-0 ovn_controller[98843]: 2026-02-19T20:20:39Z|00045|binding|INFO|Claiming lport bbe0af68-c9d2-4b14-854b-b5355d9ef899 for this chassis.
Feb 19 20:20:39 compute-0 ovn_controller[98843]: 2026-02-19T20:20:39Z|00046|binding|INFO|bbe0af68-c9d2-4b14-854b-b5355d9ef899: Claiming fa:16:3e:2c:50:54 192.168.0.76
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.605 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:50:54 192.168.0.76'], port_security=['fa:16:3e:2c:50:54 192.168.0.76'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5pf5gh4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-port-pfpt4fxi2gjn', 'neutron:cidrs': '192.168.0.76/24', 'neutron:device_id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5pf5gh4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-port-pfpt4fxi2gjn', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=bbe0af68-c9d2-4b14-854b-b5355d9ef899) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.606 108175 INFO neutron.agent.ovn.metadata.agent [-] Port bbe0af68-c9d2-4b14-854b-b5355d9ef899 in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 bound to our chassis
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.607 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.611 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.613 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:39 compute-0 ovn_controller[98843]: 2026-02-19T20:20:39Z|00047|binding|INFO|Setting lport bbe0af68-c9d2-4b14-854b-b5355d9ef899 up in Southbound
Feb 19 20:20:39 compute-0 ovn_controller[98843]: 2026-02-19T20:20:39Z|00048|binding|INFO|Setting lport bbe0af68-c9d2-4b14-854b-b5355d9ef899 ovn-installed in OVS
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.622 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.623 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[cd26a54c-aee7-42f4-b19a-c536db19deb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:20:39 compute-0 systemd-udevd[245363]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:20:39 compute-0 systemd-machined[158158]: New machine qemu-4-instance-00000004.
Feb 19 20:20:39 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 19 20:20:39 compute-0 NetworkManager[57033]: <info>  [1771532439.6447] device (tapbbe0af68-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.651 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[39e8a889-2c99-4e92-bda8-6e362e1db148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.654 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3b4062-ee51-4178-8a88-3e99c80d07a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:20:39 compute-0 NetworkManager[57033]: <info>  [1771532439.6547] device (tapbbe0af68-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.677 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[f08218e8-e851-47e7-bf6e-2dd750c223f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.689 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc51427-0786-4d1b-aaad-a3b67c712133]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 39058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245378, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.701 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c1752d99-fa9d-4a8c-ac44-fdd1a84bf583]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348361, 'tstamp': 348361}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245385, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348365, 'tstamp': 348365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245385, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.703 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.705 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.706 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.706 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.707 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:20:39 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:20:39.708 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:20:39 compute-0 podman[245343]: 2026-02-19 20:20:39.721311232 +0000 UTC m=+0.141604229 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.846 188781 DEBUG nova.compute.manager [req-c9f4fc95-7ede-465b-b29c-d329de974a34 req-b8ee5ccd-105f-413e-96af-eca3e62d5cc0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.846 188781 DEBUG oslo_concurrency.lockutils [req-c9f4fc95-7ede-465b-b29c-d329de974a34 req-b8ee5ccd-105f-413e-96af-eca3e62d5cc0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.847 188781 DEBUG oslo_concurrency.lockutils [req-c9f4fc95-7ede-465b-b29c-d329de974a34 req-b8ee5ccd-105f-413e-96af-eca3e62d5cc0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.847 188781 DEBUG oslo_concurrency.lockutils [req-c9f4fc95-7ede-465b-b29c-d329de974a34 req-b8ee5ccd-105f-413e-96af-eca3e62d5cc0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:39 compute-0 nova_compute[188777]: 2026-02-19 20:20:39.847 188781 DEBUG nova.compute.manager [req-c9f4fc95-7ede-465b-b29c-d329de974a34 req-b8ee5ccd-105f-413e-96af-eca3e62d5cc0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Processing event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.661 188781 DEBUG nova.network.neutron [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updated VIF entry in instance network info cache for port bbe0af68-c9d2-4b14-854b-b5355d9ef899. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.661 188781 DEBUG nova.network.neutron [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.678 188781 DEBUG oslo_concurrency.lockutils [req-4ae370db-f1ad-487b-b54b-f06927c9e63f req-994c9c1a-ac16-4af2-a66e-fad367dd8bfb 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.698 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.699 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532440.6977184, 1cda3ab8-0805-4bcd-955c-996994fd3cb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.700 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] VM Started (Lifecycle Event)
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.704 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.707 188781 INFO nova.virt.libvirt.driver [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Instance spawned successfully.
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.708 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.725 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.734 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.740 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.740 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.741 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.742 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.743 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.743 188781 DEBUG nova.virt.libvirt.driver [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.761 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.762 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532440.6978514, 1cda3ab8-0805-4bcd-955c-996994fd3cb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.762 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] VM Paused (Lifecycle Event)
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.784 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.791 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532440.7048676, 1cda3ab8-0805-4bcd-955c-996994fd3cb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.791 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] VM Resumed (Lifecycle Event)
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.810 188781 INFO nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Took 8.82 seconds to spawn the instance on the hypervisor.
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.810 188781 DEBUG nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.812 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.821 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.846 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.867 188781 INFO nova.compute.manager [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Took 9.77 seconds to build instance.
Feb 19 20:20:40 compute-0 nova_compute[188777]: 2026-02-19 20:20:40.881 188781 DEBUG oslo_concurrency.lockutils [None req-a0581f31-13c8-4d7b-b918-72cb75913788 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:20:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.936 188781 DEBUG nova.compute.manager [req-d5b57c7e-65d2-4e0b-bdb9-4bb089fb668e req-f85b9466-3c06-4e65-9f78-91eca7ab1bb1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.937 188781 DEBUG oslo_concurrency.lockutils [req-d5b57c7e-65d2-4e0b-bdb9-4bb089fb668e req-f85b9466-3c06-4e65-9f78-91eca7ab1bb1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.938 188781 DEBUG oslo_concurrency.lockutils [req-d5b57c7e-65d2-4e0b-bdb9-4bb089fb668e req-f85b9466-3c06-4e65-9f78-91eca7ab1bb1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.938 188781 DEBUG oslo_concurrency.lockutils [req-d5b57c7e-65d2-4e0b-bdb9-4bb089fb668e req-f85b9466-3c06-4e65-9f78-91eca7ab1bb1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.939 188781 DEBUG nova.compute.manager [req-d5b57c7e-65d2-4e0b-bdb9-4bb089fb668e req-f85b9466-3c06-4e65-9f78-91eca7ab1bb1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] No waiting events found dispatching network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:20:41 compute-0 nova_compute[188777]: 2026-02-19 20:20:41.939 188781 WARNING nova.compute.manager [req-d5b57c7e-65d2-4e0b-bdb9-4bb089fb668e req-f85b9466-3c06-4e65-9f78-91eca7ab1bb1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received unexpected event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 for instance with vm_state active and task_state None.
Feb 19 20:20:42 compute-0 nova_compute[188777]: 2026-02-19 20:20:42.833 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:43 compute-0 nova_compute[188777]: 2026-02-19 20:20:43.423 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:20:43 compute-0 nova_compute[188777]: 2026-02-19 20:20:43.423 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:20:43 compute-0 nova_compute[188777]: 2026-02-19 20:20:43.423 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:20:43 compute-0 nova_compute[188777]: 2026-02-19 20:20:43.575 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.757 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updating instance_info_cache with network_info: [{"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.780 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.780 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.780 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.780 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.781 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.804 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.804 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.805 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.805 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.835 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:47 compute-0 nova_compute[188777]: 2026-02-19 20:20:47.932 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.009 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.010 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.083 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.084 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.170 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.171 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.265 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.276 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.364 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.366 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.434 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.435 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.509 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.510 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.578 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.595 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.603 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.659 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.661 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.718 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.720 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.797 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.798 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.849 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.857 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.918 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.919 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.974 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:48 compute-0 nova_compute[188777]: 2026-02-19 20:20:48.975 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.031 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.034 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.107 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.468 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.470 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4767MB free_disk=72.20292282104492GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.470 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.470 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.565 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.566 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.566 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.566 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.567 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.567 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.681 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.702 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.737 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:20:49 compute-0 nova_compute[188777]: 2026-02-19 20:20:49.738 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:20:51 compute-0 nova_compute[188777]: 2026-02-19 20:20:51.734 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:20:52 compute-0 podman[245470]: 2026-02-19 20:20:52.405868309 +0000 UTC m=+0.084951740 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:20:52 compute-0 podman[245469]: 2026-02-19 20:20:52.437687992 +0000 UTC m=+0.113631655 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:20:52 compute-0 nova_compute[188777]: 2026-02-19 20:20:52.837 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:53 compute-0 nova_compute[188777]: 2026-02-19 20:20:53.583 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:57 compute-0 podman[245512]: 2026-02-19 20:20:57.464667551 +0000 UTC m=+0.131531954 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:20:57 compute-0 nova_compute[188777]: 2026-02-19 20:20:57.844 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:58 compute-0 nova_compute[188777]: 2026-02-19 20:20:58.587 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:20:59 compute-0 sshd-session[245531]: Invalid user iksi from 125.94.106.195 port 57772
Feb 19 20:20:59 compute-0 sshd-session[245531]: Received disconnect from 125.94.106.195 port 57772:11: Bye Bye [preauth]
Feb 19 20:20:59 compute-0 sshd-session[245531]: Disconnected from invalid user iksi 125.94.106.195 port 57772 [preauth]
Feb 19 20:20:59 compute-0 podman[204724]: time="2026-02-19T20:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:20:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:20:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Feb 19 20:21:00 compute-0 podman[245534]: 2026-02-19 20:21:00.437349723 +0000 UTC m=+0.109865289 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:21:00 compute-0 podman[245533]: 2026-02-19 20:21:00.439310224 +0000 UTC m=+0.114633977 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Feb 19 20:21:01 compute-0 openstack_network_exporter[207898]: ERROR   20:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:21:01 compute-0 openstack_network_exporter[207898]: ERROR   20:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:21:02 compute-0 sshd-session[245572]: Invalid user ubuntu from 154.12.80.151 port 40742
Feb 19 20:21:02 compute-0 nova_compute[188777]: 2026-02-19 20:21:02.845 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:03 compute-0 sshd-session[245572]: Received disconnect from 154.12.80.151 port 40742:11: Bye Bye [preauth]
Feb 19 20:21:03 compute-0 sshd-session[245572]: Disconnected from invalid user ubuntu 154.12.80.151 port 40742 [preauth]
Feb 19 20:21:03 compute-0 nova_compute[188777]: 2026-02-19 20:21:03.591 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:04 compute-0 podman[245574]: 2026-02-19 20:21:04.405033315 +0000 UTC m=+0.086532981 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:21:07 compute-0 podman[245596]: 2026-02-19 20:21:07.413801222 +0000 UTC m=+0.103349776 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:21:07 compute-0 nova_compute[188777]: 2026-02-19 20:21:07.848 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:08 compute-0 nova_compute[188777]: 2026-02-19 20:21:08.594 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:09 compute-0 ovn_controller[98843]: 2026-02-19T20:21:09Z|00049|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Feb 19 20:21:10 compute-0 podman[245617]: 2026-02-19 20:21:10.452262884 +0000 UTC m=+0.122202223 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 19 20:21:12 compute-0 nova_compute[188777]: 2026-02-19 20:21:12.850 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:13 compute-0 nova_compute[188777]: 2026-02-19 20:21:13.597 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:15 compute-0 ovn_controller[98843]: 2026-02-19T20:21:15Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:50:54 192.168.0.76
Feb 19 20:21:15 compute-0 ovn_controller[98843]: 2026-02-19T20:21:15Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:50:54 192.168.0.76
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.139 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.139 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.140 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.140 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.141 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f6757860>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.146 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.149 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b', 'name': 'vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.152 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.152 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1cda3ab8-0805-4bcd-955c-996994fd3cb4 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.586 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Thu, 19 Feb 2026 20:21:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4e27d7dc-d3ad-4e38-afeb-160e2c146650 x-openstack-request-id: req-4e27d7dc-d3ad-4e38-afeb-160e2c146650 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.586 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1cda3ab8-0805-4bcd-955c-996994fd3cb4", "name": "vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp", "status": "ACTIVE", "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "user_id": "9f5597a45dc34ee19bcfe938afde768f", "metadata": {"metering.server_group": "78adc0ea-8772-4283-8bd6-6dbdcecee09e"}, "hostId": "fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7", "image": {"id": "e1a79c75-2fa3-410d-9c4c-91db3eeca51d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e1a79c75-2fa3-410d-9c4c-91db3eeca51d"}]}, "flavor": {"id": "8030bc1a-9afb-4678-ac07-8b59a1275925", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8030bc1a-9afb-4678-ac07-8b59a1275925"}]}, "created": "2026-02-19T20:20:29Z", "updated": "2026-02-19T20:20:40Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.76", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:2c:50:54"}, {"version": 4, "addr": "192.168.122.174", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:2c:50:54"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1cda3ab8-0805-4bcd-955c-996994fd3cb4"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1cda3ab8-0805-4bcd-955c-996994fd3cb4"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:20:40.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.586 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1cda3ab8-0805-4bcd-955c-996994fd3cb4 used request id req-4e27d7dc-d3ad-4e38-afeb-160e2c146650 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.588 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'name': 'vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.593 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'name': 'vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
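The two discovery records above show the agent pairing each local libvirt domain with the instance metadata it fetched from Nova (the GET logged with request id req-4e27d7dc-... at 20:21:15.586). The following is a minimal sketch, not ceilometer's own code, of the same lookup via keystoneauth1 and python-novaclient; the Keystone URL and credentials are placeholders, and only the server UUID and the compute microversion (2.1) come from the log.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Assumed auth endpoint and credentials; replace with real ones.
    auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # Same call as "GET .../v2.1/servers/1cda3ab8-0805-4bcd-955c-996994fd3cb4"
    server = nova.servers.get('1cda3ab8-0805-4bcd-955c-996994fd3cb4')
    print(server.name, server.status, getattr(server, 'OS-EXT-SRV-ATTR:host'))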
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.594 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.594 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.595 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.595 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:21:15.595295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.604 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.611 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.617 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1cda3ab8-0805-4bcd-955c-996994fd3cb4 / tapbbe0af68-c9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.618 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.625 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.626 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.626 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.626 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.626 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.627 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.627 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.627 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:21:15.627297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.628 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp>]
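The ERROR above is expected rather than fatal: the libvirt inspector does not produce the *.rate meters (see the preceding "LibvirtInspector does not provide data" DEBUG line), so the pollster raises PollsterPermanentError and the manager blacklists those resources for this source instead of retrying them every interval. A simplified sketch of that pattern, not ceilometer's actual implementation:

    class PollsterPermanentError(Exception):
        """Carries the resources a pollster can never serve."""
        def __init__(self, resources):
            self.fail_res_list = resources

    def poll_once(pollster, resources, blacklist):
        # Resources already proven unsupported are skipped up front.
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(todo))
        except PollsterPermanentError as err:
            # Never offer these resources to this pollster/source again.
            blacklist.extend(err.fail_res_list)
            return []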
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.628 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.628 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.629 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.629 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.629 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.629 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.630 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:21:15.629648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.631 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets volume: 7 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.632 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.633 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.633 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.633 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.634 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.634 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.634 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:21:15.634423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.634 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.635 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.635 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.636 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.637 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.637 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.638 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.638 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.638 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.638 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:21:15.638667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.639 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.639 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.640 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes volume: 1163 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.640 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes volume: 7172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.641 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.641 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.641 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.642 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.642 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.642 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:21:15.642443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.677 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.710 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.749 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.784 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.785 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
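All four power.state samples report volume 1. The pollster takes the state from the libvirt domain, so the numeric codes follow libvirt's virDomainState enum, under which 1 is VIR_DOMAIN_RUNNING. A reference mapping for reading these samples:

    # virDomainState codes as defined by libvirt; volume 1 above means the
    # domain is running, consistent with "OS-EXT-STS:power_state": 1 in the
    # Nova response earlier in this poll cycle.
    LIBVIRT_DOMAIN_STATE = {
        0: 'NOSTATE', 1: 'RUNNING', 2: 'BLOCKED', 3: 'PAUSED',
        4: 'SHUTDOWN', 5: 'SHUTOFF', 6: 'CRASHED', 7: 'PMSUSPENDED',
    }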
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.785 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.785 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.785 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.785 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.786 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.786 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.786 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes.delta volume: 550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.786 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.787 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes.delta volume: 2408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.787 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:21:15.785957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.788 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.788 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.788 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.788 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.788 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.788 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.789 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:21:15.788885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.820 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.822 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.823 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.844 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.844 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.845 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.866 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.866 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.866 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.886 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.887 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.887 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:15.999 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
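Each instance yields three disk.device.capacity samples because each guest exposes three block devices: the two 1073741824-byte (1 GiB) volumes match the m1.small flavor's 1 GB root and 1 GB ephemeral disks, and the ~0.5 MB third device is consistent with the config drive ("config_drive": "True" in the Nova response above). The device names are not logged; a minimal sketch, assuming libvirt-python, local qemu access, and vda/vdb/vdc naming, of confirming the figures by hand:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000004')   # from the Nova response
    for dev in ('vda', 'vdb', 'vdc'):              # assumed device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)   # expect 1073741824, 1073741824, 583680
    conn.close()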
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.000 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.000 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.000 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.000 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.001 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:21:16.001023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.075 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.076 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.076 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.133 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.134 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.134 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.204 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.205 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.205 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.277 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.278 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.278 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.280 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.280 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.281 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 37630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.281 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/cpu volume: 32830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.282 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/cpu volume: 33040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.282 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/cpu volume: 342920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
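The cpu meter is cumulative guest CPU time in nanoseconds, not a percentage: 342920000000 ns for 0975826c-... is roughly 343 s of CPU time since the domain started. A utilization only falls out of two consecutive polls; a worked example where the second sample is hypothetical:

    prev_ns, prev_t = 342_920_000_000, 0.0    # logged sample for 0975826c-...
    curr_ns, curr_t = 345_920_000_000, 300.0  # assumed poll 300 s later
    vcpus = 1                                 # m1.small, per the flavor data

    # 3e9 ns of CPU time over 300 s of wall clock on 1 vCPU -> 1.0 %
    cpu_util_pct = (curr_ns - prev_ns) / ((curr_t - prev_t) * 1e9 * vcpus) * 100
    print(f'{cpu_util_pct:.1f}% CPU')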
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.284 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.284 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.284 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:21:16.280892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.285 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.285 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp>]
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.287 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:21:16.285085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.287 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.287 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.287 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.288 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:21:16.287732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.288 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.289 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.289 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 786473372 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.290 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 127444335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.290 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 200419857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.291 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 683601533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.291 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 109290795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.292 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 110141141 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.292 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 699163782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.292 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 126021412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.293 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 99179876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.294 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.295 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.295 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.295 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.295 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.295 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.296 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.296 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.296 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.297 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.297 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.297 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.297 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.297 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.297 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.298 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.298 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.298 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.298 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.299 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.299 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.299 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.299 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.300 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:21:16.295810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.300 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:21:16.298044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.300 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.300 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.301 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.301 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.301 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.301 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.302 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.302 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.302 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.302 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.303 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.303 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.303 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.304 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
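[Editor's note] The disk.device.read.requests run above shows the fixed shape every pollster cycle in this log follows: execute discovery for [local_instances], check whether the pollster belongs to a coordinated source (the hashrings are always [None] here, so no coordination is used), update the heartbeat, emit one volume sample per device per instance, then log completion. A minimal, self-contained sketch of that control flow follows; all names in it (Sample, run_one_pollster, discover_local_instances) are illustrative stand-ins, not ceilometer's real API — see ceilometer/polling/manager.py in the paths above for the actual implementation.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Sample:
        instance_id: str
        meter: str
        volume: float

    def discover_local_instances():
        # Stand-in for the libvirt-backed [local_instances] discovery;
        # the UUIDs are two of the four instances polled in the log.
        return ["5aaac42d-946d-4c6f-9bde-23b8b6613b59",
                "14ed9fe0-b150-4bd8-852e-7f2f62d4374b"]

    def run_one_pollster(meter, per_device_volumes, hashring=None):
        # Mirrors the five logged steps: discovery, coordination check,
        # heartbeat, per-device samples, completion.
        instances = discover_local_instances()
        if hashring is None:
            pass  # "not configured in a source for polling that requires coordination"
        heartbeat = datetime.now(timezone.utc)  # "Pollster heartbeat update: <meter>"
        samples = [Sample(i, meter, v)
                   for i in instances
                   for v in per_device_volumes]
        return heartbeat, samples

    hb, samples = run_one_pollster("disk.device.read.requests", [840, 173, 109])
    for s in samples:
        print(f"{s.instance_id}/{s.meter} volume: {s.volume}")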
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.304 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.304 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.304 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.304 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.305 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.305 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.305 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.305 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.305 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.306 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.306 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.306 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.306 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.306 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.306 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.307 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.307 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.307 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.307 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.308 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.308 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.308 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:21:16.300509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.309 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:21:16.304982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:21:16.306962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.309 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.309 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.310 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.310 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.310 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.311 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.312 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.312 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.312 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.313 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.313 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.313 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.313 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.314 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.314 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.314 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.315 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.315 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.315 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.315 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:21:16.311508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.316 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.316 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.316 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.316 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:21:16.316126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.317 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.317 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.317 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.317 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.318 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.318 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.318 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.318 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.319 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.319 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.320 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.321 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.321 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 1278193356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.321 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 16674926 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:21:16.320575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.322 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.322 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 1895732294 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.322 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 10533639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.323 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.323 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 1803958147 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.323 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 9187833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.323 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.324 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:21:16.324846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.325 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.325 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.326 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.326 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.326 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.326 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.326 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.326 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.327 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.327 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.327 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:21:16.326985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.327 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.328 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.328 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.328 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.328 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.329 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.329 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.329 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.329 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.330 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.330 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.330 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.330 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.330 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.331 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.331 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.331 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:21:16.331122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.332 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.332 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.332 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.332 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.332 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.333 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.333 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.333 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:21:16.332805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:21:16.334263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.334 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.335 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/memory.usage volume: 33.2734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.335 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.335 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:21:16.336463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.337 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.337 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes volume: 1388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.337 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.338 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.339 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:21:16.340 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
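[Editor's note] The run of "Finished processing pollster" lines above closes one complete polling task. For offline analysis, the per-device sample values can be recovered from the debug stream with a stdlib-only regex keyed on the "<uuid>/<meter> volume: <n>" shape seen throughout this section; the pattern below is a best-effort assumption about that debug format, not an official ceilometer parser.

    import re
    from collections import defaultdict

    # Shape of the sample lines above:
    #   "... DEBUG ceilometer.compute.pollsters [-] <uuid>/<meter> volume: <n> _stats_to_sample ..."
    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
    )

    def volumes_by_instance(lines):
        # Sum sample volumes per (instance, meter) pair.
        acc = defaultdict(float)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                acc[(m["instance"], m["meter"])] += float(m["volume"])
        return dict(acc)

    demo = ("Feb 19 20:21:16 compute-0 ceilometer_agent_compute[198530]: "
            "2026-02-19 20:21:16.336 15 DEBUG ceilometer.compute.pollsters [-] "
            "5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2136 "
            "_stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108")
    print(volumes_by_instance([demo]))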
Feb 19 20:21:17 compute-0 nova_compute[188777]: 2026-02-19 20:21:17.854 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:18 compute-0 nova_compute[188777]: 2026-02-19 20:21:18.600 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:22 compute-0 sshd-session[245651]: Received disconnect from 160.187.147.124 port 35678:11: Bye Bye [preauth]
Feb 19 20:21:22 compute-0 sshd-session[245651]: Disconnected from authenticating user root 160.187.147.124 port 35678 [preauth]
Feb 19 20:21:22 compute-0 nova_compute[188777]: 2026-02-19 20:21:22.857 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:23 compute-0 podman[245654]: 2026-02-19 20:21:23.385682656 +0000 UTC m=+0.058605499 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:21:23 compute-0 podman[245653]: 2026-02-19 20:21:23.409153409 +0000 UTC m=+0.086070276 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, release=1770267347, architecture=x86_64, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Feb 19 20:21:23 compute-0 nova_compute[188777]: 2026-02-19 20:21:23.602 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:27 compute-0 nova_compute[188777]: 2026-02-19 20:21:27.860 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:28 compute-0 podman[245697]: 2026-02-19 20:21:28.404757721 +0000 UTC m=+0.083805525 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 19 20:21:28 compute-0 nova_compute[188777]: 2026-02-19 20:21:28.605 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:29 compute-0 podman[204724]: time="2026-02-19T20:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:21:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:21:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
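The two access-log lines above show a client (the podman exporter) scraping the libpod REST API over the podman socket. A sketch of the same container listing, assuming the third-party requests-unixsocket package; the endpoint path is copied from the log and the socket path from the podman_exporter volume mount recorded further down:

    # List containers through the libpod REST API, as the exporter does.
    # Assumes the requests-unixsocket package is installed.
    import requests_unixsocket

    session = requests_unixsocket.Session()
    resp = session.get(
        'http+unix://%2Frun%2Fpodman%2Fpodman.sock'
        '/v4.9.3/libpod/containers/json?all=true')
    print(resp.status_code, len(resp.json()), 'containers')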
Feb 19 20:21:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:21:30.432 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:21:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:21:30.433 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:21:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:21:30.434 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
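The acquire/release triplet above is the standard oslo.concurrency synchronized pattern: the decorator logs "Acquiring", "acquired :: waited Ns", and "released :: held Ns" around the wrapped call. A minimal sketch, assuming only the public oslo_concurrency.lockutils API; the function body is illustrative:

    from oslo_concurrency import lockutils

    # Produces the same three DEBUG lines seen above, tagged with the
    # lock name and the wrapped function's qualified name.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # The real ProcessMonitor walks its child processes here and
        # respawns any that have exited; elided in this sketch.
        pass

    check_child_processes()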
Feb 19 20:21:31 compute-0 openstack_network_exporter[207898]: ERROR   20:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:21:31 compute-0 openstack_network_exporter[207898]: ERROR   20:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
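The two ERROR lines come from the exporter invoking OVS appctl targets that exist only when a userspace (netdev) datapath is configured; on a host running the kernel datapath, as here, OVS answers "please specify an existing datapath". A hedged reproduction by hand, shelling out to the standard ovs-appctl binary (the Python wrapper itself is illustrative):

    # Reproduce the exporter's failing calls; ovs-appctl and the
    # dpif-netdev/* targets are standard OVS, the wrapper is not.
    import subprocess

    for target in ('dpif-netdev/pmd-perf-show', 'dpif-netdev/pmd-rxq-show'):
        result = subprocess.run(['ovs-appctl', target],
                                capture_output=True, text=True)
        # On kernel-datapath hosts this prints the same error as the log.
        print(target, '->', result.returncode, result.stderr.strip())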
Feb 19 20:21:31 compute-0 podman[245717]: 2026-02-19 20:21:31.446188142 +0000 UTC m=+0.107766523 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 19 20:21:31 compute-0 podman[245716]: 2026-02-19 20:21:31.465616018 +0000 UTC m=+0.136248722 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Feb 19 20:21:32 compute-0 nova_compute[188777]: 2026-02-19 20:21:32.865 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:33 compute-0 nova_compute[188777]: 2026-02-19 20:21:33.608 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:34 compute-0 sshd-session[245755]: Received disconnect from 83.235.16.111 port 41126:11: Bye Bye [preauth]
Feb 19 20:21:34 compute-0 sshd-session[245755]: Disconnected from authenticating user root 83.235.16.111 port 41126 [preauth]
Feb 19 20:21:35 compute-0 podman[245757]: 2026-02-19 20:21:35.415840153 +0000 UTC m=+0.093506747 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:21:36 compute-0 nova_compute[188777]: 2026-02-19 20:21:36.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:36 compute-0 nova_compute[188777]: 2026-02-19 20:21:36.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:21:37 compute-0 nova_compute[188777]: 2026-02-19 20:21:37.867 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:38 compute-0 nova_compute[188777]: 2026-02-19 20:21:38.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:38 compute-0 podman[245781]: 2026-02-19 20:21:38.399024111 +0000 UTC m=+0.087637295 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:21:38 compute-0 nova_compute[188777]: 2026-02-19 20:21:38.611 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:40 compute-0 nova_compute[188777]: 2026-02-19 20:21:40.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:40 compute-0 nova_compute[188777]: 2026-02-19 20:21:40.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:41 compute-0 nova_compute[188777]: 2026-02-19 20:21:41.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:41 compute-0 podman[245801]: 2026-02-19 20:21:41.461329877 +0000 UTC m=+0.148753512 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 19 20:21:42 compute-0 nova_compute[188777]: 2026-02-19 20:21:42.871 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.488 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.488 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.489 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.489 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:21:43 compute-0 nova_compute[188777]: 2026-02-19 20:21:43.614 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.351 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.368 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.369 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.370 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.371 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.399 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.399 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.399 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.400 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.511 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.601 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.602 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.691 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.693 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.761 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.763 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.816 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.826 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.884 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.897 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.947 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:44 compute-0 nova_compute[188777]: 2026-02-19 20:21:44.950 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.044 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.046 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.107 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.118 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.181 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.182 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.245 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.247 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.346 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.348 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.424 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.435 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.494 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.495 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.577 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.578 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.660 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.660 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:21:45 compute-0 nova_compute[188777]: 2026-02-19 20:21:45.721 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
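Each qemu-img run above is wrapped in "python3 -m oslo_concurrency.prlimit" so the child is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) before probing the image. A sketch of how that command line is produced, assuming the public oslo.concurrency processutils API; the disk path is copied from the log:

    from oslo_concurrency import processutils

    # Mirrors the logged command, including the prlimit wrapper.
    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824
        cpu_time=30,               # --cpu=30
    )
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)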
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.136 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.137 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4659MB free_disk=72.1816520690918GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.137 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.137 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.214 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.214 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.214 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.214 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.214 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.215 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.301 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.316 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.318 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:21:46 compute-0 nova_compute[188777]: 2026-02-19 20:21:46.318 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
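The "Final resource view" line is plain arithmetic over the four placement allocations listed above plus the reserved host memory from the inventory line just above it. A worked check, with the numbers copied from the log:

    # Four instances, each allocated DISK_GB=2, MEMORY_MB=512, VCPU=1,
    # plus 512 MB reserved host RAM (MEMORY_MB 'reserved' in inventory).
    instances = 4
    used_vcpus = instances * 1           # 4    -> used_vcpus=4
    used_disk_gb = instances * 2         # 8    -> used_disk=8GB
    used_ram_mb = instances * 512 + 512  # 2560 -> used_ram=2560MB
    print(used_vcpus, used_disk_gb, used_ram_mb)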
Feb 19 20:21:47 compute-0 nova_compute[188777]: 2026-02-19 20:21:47.874 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:48 compute-0 nova_compute[188777]: 2026-02-19 20:21:48.210 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:21:48 compute-0 nova_compute[188777]: 2026-02-19 20:21:48.617 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:52 compute-0 nova_compute[188777]: 2026-02-19 20:21:52.876 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:53 compute-0 nova_compute[188777]: 2026-02-19 20:21:53.621 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:54 compute-0 podman[245882]: 2026-02-19 20:21:54.422224736 +0000 UTC m=+0.091275839 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:21:54 compute-0 podman[245881]: 2026-02-19 20:21:54.439763283 +0000 UTC m=+0.114321497 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 19 20:21:57 compute-0 nova_compute[188777]: 2026-02-19 20:21:57.880 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:58 compute-0 nova_compute[188777]: 2026-02-19 20:21:58.625 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:21:59 compute-0 podman[245925]: 2026-02-19 20:21:59.420470765 +0000 UTC m=+0.102013894 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:21:59 compute-0 podman[204724]: time="2026-02-19T20:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:21:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:21:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 19 20:22:01 compute-0 openstack_network_exporter[207898]: ERROR   20:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:22:01 compute-0 openstack_network_exporter[207898]: ERROR   20:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:22:02 compute-0 podman[245943]: 2026-02-19 20:22:02.404429961 +0000 UTC m=+0.088072519 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, config_id=kepler, io.openshift.tags=base rhel9)
Feb 19 20:22:02 compute-0 podman[245944]: 2026-02-19 20:22:02.412138212 +0000 UTC m=+0.094244631 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 19 20:22:02 compute-0 nova_compute[188777]: 2026-02-19 20:22:02.882 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:03 compute-0 nova_compute[188777]: 2026-02-19 20:22:03.628 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:06 compute-0 podman[245982]: 2026-02-19 20:22:06.426948941 +0000 UTC m=+0.105089929 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:22:07 compute-0 nova_compute[188777]: 2026-02-19 20:22:07.886 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:08 compute-0 nova_compute[188777]: 2026-02-19 20:22:08.633 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:09 compute-0 podman[246004]: 2026-02-19 20:22:09.426477568 +0000 UTC m=+0.100133665 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:22:12 compute-0 podman[246026]: 2026-02-19 20:22:12.426586993 +0000 UTC m=+0.106645698 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:22:12 compute-0 nova_compute[188777]: 2026-02-19 20:22:12.888 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:13 compute-0 nova_compute[188777]: 2026-02-19 20:22:13.636 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:17 compute-0 nova_compute[188777]: 2026-02-19 20:22:17.891 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:18 compute-0 nova_compute[188777]: 2026-02-19 20:22:18.641 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:19 compute-0 sshd-session[246024]: Invalid user uatmon from 103.103.245.7 port 36996
Feb 19 20:22:20 compute-0 sshd-session[246024]: Received disconnect from 103.103.245.7 port 36996:11: Bye Bye [preauth]
Feb 19 20:22:20 compute-0 sshd-session[246024]: Disconnected from invalid user uatmon 103.103.245.7 port 36996 [preauth]
Feb 19 20:22:22 compute-0 nova_compute[188777]: 2026-02-19 20:22:22.893 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:23 compute-0 nova_compute[188777]: 2026-02-19 20:22:23.644 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:25 compute-0 podman[246053]: 2026-02-19 20:22:25.416460796 +0000 UTC m=+0.091560977 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter)
Feb 19 20:22:25 compute-0 podman[246054]: 2026-02-19 20:22:25.437979827 +0000 UTC m=+0.106419151 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:22:27 compute-0 nova_compute[188777]: 2026-02-19 20:22:27.898 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:28 compute-0 nova_compute[188777]: 2026-02-19 20:22:28.647 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:29 compute-0 podman[204724]: time="2026-02-19T20:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:22:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:22:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 19 20:22:30 compute-0 podman[246099]: 2026-02-19 20:22:30.414681836 +0000 UTC m=+0.094696315 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:22:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:22:30.433 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:22:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:22:30.434 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:22:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:22:30.434 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:22:31 compute-0 openstack_network_exporter[207898]: ERROR   20:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:22:31 compute-0 openstack_network_exporter[207898]: ERROR   20:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:22:32 compute-0 sshd-session[246052]: error: kex_exchange_identification: read: Connection timed out
Feb 19 20:22:32 compute-0 sshd-session[246052]: banner exchange: Connection from 125.94.106.195 port 39434: Connection timed out
Feb 19 20:22:32 compute-0 nova_compute[188777]: 2026-02-19 20:22:32.899 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:33 compute-0 podman[246117]: 2026-02-19 20:22:33.413195946 +0000 UTC m=+0.088978346 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Feb 19 20:22:33 compute-0 podman[246118]: 2026-02-19 20:22:33.422931899 +0000 UTC m=+0.098369120 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:22:33 compute-0 nova_compute[188777]: 2026-02-19 20:22:33.650 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:36 compute-0 nova_compute[188777]: 2026-02-19 20:22:36.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:36 compute-0 nova_compute[188777]: 2026-02-19 20:22:36.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:22:37 compute-0 podman[246156]: 2026-02-19 20:22:37.394455048 +0000 UTC m=+0.073234556 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:22:37 compute-0 nova_compute[188777]: 2026-02-19 20:22:37.901 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:38 compute-0 nova_compute[188777]: 2026-02-19 20:22:38.653 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:40 compute-0 nova_compute[188777]: 2026-02-19 20:22:40.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:40 compute-0 podman[246179]: 2026-02-19 20:22:40.416269123 +0000 UTC m=+0.095365004 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 19 20:22:41 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:22:41 compute-0 nova_compute[188777]: 2026-02-19 20:22:41.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:42 compute-0 nova_compute[188777]: 2026-02-19 20:22:42.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:42 compute-0 nova_compute[188777]: 2026-02-19 20:22:42.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:42 compute-0 nova_compute[188777]: 2026-02-19 20:22:42.903 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:43 compute-0 podman[246199]: 2026-02-19 20:22:43.492222907 +0000 UTC m=+0.165748358 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 19 20:22:43 compute-0 nova_compute[188777]: 2026-02-19 20:22:43.657 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:44 compute-0 nova_compute[188777]: 2026-02-19 20:22:44.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:44 compute-0 nova_compute[188777]: 2026-02-19 20:22:44.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:22:44 compute-0 nova_compute[188777]: 2026-02-19 20:22:44.523 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:22:44 compute-0 nova_compute[188777]: 2026-02-19 20:22:44.524 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:22:44 compute-0 nova_compute[188777]: 2026-02-19 20:22:44.524 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.738 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [{"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.759 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.760 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.761 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.762 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.791 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.792 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.792 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.793 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.903 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.984 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:45 compute-0 nova_compute[188777]: 2026-02-19 20:22:45.985 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.066 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.067 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.147 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.148 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.227 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.236 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.296 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.298 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.366 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.368 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.420 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.421 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.485 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.495 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.577 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.578 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.657 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.660 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.742 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.743 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.808 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.814 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.888 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.889 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.948 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:46 compute-0 nova_compute[188777]: 2026-02-19 20:22:46.952 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.016 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.017 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.079 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
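The burst of qemu-img invocations above is Nova's periodic disk audit: each instance disk (root and eph0) is inspected under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30). A minimal sketch of issuing the same call through the same library (placeholder disk path; assumes oslo.concurrency and qemu-img are installed):

    from oslo_concurrency import processutils

    # Mirrors the logged command line: prlimit --as=1073741824 --cpu=30 --
    #   env LC_ALL=C LANG=C qemu-img info <disk> --force-share --output=json
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/<instance-uuid>/disk",  # placeholder path
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1 << 30, cpu_time=30),
    )
    print(out)

--force-share lets the probe read images that a running QEMU process holds open, which is why the audit can run against live instances.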
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.487 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.488 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4626MB free_disk=72.1816520690918GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.488 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.489 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.600 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.601 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.601 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.601 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.602 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.604 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.758 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.782 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.784 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.784 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
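The inventory payload reported at 20:22:47.782 is what Placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the logged values:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2

With four 1-vCPU instances placed, that leaves substantial schedulable headroom even though the unratioed hypervisor view above reports free_vcpus=4 of 8 physical.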
Feb 19 20:22:47 compute-0 nova_compute[188777]: 2026-02-19 20:22:47.905 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:48 compute-0 nova_compute[188777]: 2026-02-19 20:22:48.660 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:48 compute-0 nova_compute[188777]: 2026-02-19 20:22:48.780 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:48 compute-0 nova_compute[188777]: 2026-02-19 20:22:48.817 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:22:52 compute-0 nova_compute[188777]: 2026-02-19 20:22:52.909 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:53 compute-0 nova_compute[188777]: 2026-02-19 20:22:53.664 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:56 compute-0 podman[246275]: 2026-02-19 20:22:56.437256328 +0000 UTC m=+0.108834464 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:22:56 compute-0 podman[246274]: 2026-02-19 20:22:56.463054871 +0000 UTC m=+0.134823472 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 19 20:22:57 compute-0 nova_compute[188777]: 2026-02-19 20:22:57.912 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:58 compute-0 nova_compute[188777]: 2026-02-19 20:22:58.667 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:22:59 compute-0 podman[204724]: time="2026-02-19T20:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:22:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:22:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
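These GET lines are the podman system service (pid 204724) answering libpod REST calls over its unix socket; the client is prometheus-podman-exporter, whose config (logged at 20:23:08 below) sets CONTAINER_HOST=unix:///run/podman/podman.sock. A sketch of the same containers/json query from Python, assuming that socket path and sufficient privileges:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print([c["Names"] for c in json.loads(conn.getresponse().read())])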
Feb 19 20:23:01 compute-0 anacron[148260]: Job `cron.daily' started
Feb 19 20:23:01 compute-0 anacron[148260]: Job `cron.daily' terminated
Feb 19 20:23:01 compute-0 openstack_network_exporter[207898]: ERROR   20:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:23:01 compute-0 openstack_network_exporter[207898]: ERROR   20:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
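The two ERRORs show openstack_network_exporter issuing OVS userspace-datapath queries (dpif-netdev/pmd-perf-show, dpif-netdev/pmd-rxq-show) on a host whose Open vSwitch apparently runs only the kernel datapath, so there is no netdev datapath to report on. A hypothetical manual reproduction (the exporter itself talks to the vswitchd control socket via its appctl.go helper; this sketch shells out to ovs-appctl instead and assumes it is installed):

    import subprocess

    # On a kernel-datapath-only host this returns non-zero with the same
    # "please specify an existing datapath" message seen in the log.
    res = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
        capture_output=True, text=True,
    )
    print(res.returncode, (res.stderr or res.stdout).strip())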
Feb 19 20:23:01 compute-0 podman[246317]: 2026-02-19 20:23:01.44990521 +0000 UTC m=+0.124941045 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:23:02 compute-0 nova_compute[188777]: 2026-02-19 20:23:02.915 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:03 compute-0 nova_compute[188777]: 2026-02-19 20:23:03.670 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:04 compute-0 podman[246336]: 2026-02-19 20:23:04.452713094 +0000 UTC m=+0.115996817 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, io.openshift.expose-services=, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Feb 19 20:23:04 compute-0 podman[246337]: 2026-02-19 20:23:04.46128117 +0000 UTC m=+0.120711543 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:23:07 compute-0 nova_compute[188777]: 2026-02-19 20:23:07.918 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:08 compute-0 podman[246373]: 2026-02-19 20:23:08.457092701 +0000 UTC m=+0.136814276 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:23:08 compute-0 nova_compute[188777]: 2026-02-19 20:23:08.674 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:11 compute-0 podman[246398]: 2026-02-19 20:23:11.463115974 +0000 UTC m=+0.139096947 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 19 20:23:12 compute-0 nova_compute[188777]: 2026-02-19 20:23:12.920 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:13 compute-0 nova_compute[188777]: 2026-02-19 20:23:13.677 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:14 compute-0 podman[246417]: 2026-02-19 20:23:14.461706185 +0000 UTC m=+0.152990700 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.140 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling cycle can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.142 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f521c920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
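Each "Registering pollster" line above wraps a stevedore Extension loaded from ceilometer's plugin entry points and queued on the single-thread executor announced at 20:23:15.142. A sketch of enumerating the same compute pollsters directly (the entry-point namespace ceilometer.poll.compute is assumed from ceilometer's packaging metadata; requires ceilometer installed in the environment):

    from stevedore import extension

    # Loads the pollster plugin classes the polling manager registers above.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
    print(sorted(mgr.names()))  # e.g. network.outgoing.packets.error, ...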
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.152 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.157 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b', 'name': 'vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.164 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'name': 'vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.168 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'name': 'vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
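The discovery payloads tie back to the placement allocations logged at 20:22:47.600-602: every instance runs the m1.small flavor (vcpus=1, ram=512, disk=1, ephemeral=1, swap=0), and DISK_GB: 2 is simply root disk plus ephemeral disk. A worked cross-check (the swap-to-GB round-up is shown for completeness and is an assumption about the general rule, not taken from this log):

    flavor = {"vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1, "swap": 0}
    allocation = {
        "VCPU": flavor["vcpus"],
        "MEMORY_MB": flavor["ram"],
        # root + ephemeral GB, plus swap MB rounded up to whole GB
        "DISK_GB": flavor["disk"] + flavor["ephemeral"]
                   + (flavor["swap"] + 1023) // 1024,
    }
    print(allocation)  # {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 2}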
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.169 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.169 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.169 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.169 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:23:15.169742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.175 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.178 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.182 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.185 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
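Every meter in this cycle leaves the same fingerprint: a discovery call, a coordination check that is skipped because no polling source ties the pollster to a hashring, a heartbeat update, one DEBUG sample per resource out of _stats_to_sample, and a closing INFO line. A minimal sketch of that control flow, with hypothetical discover/get_samples callables standing in for the real stevedore extensions:

    from datetime import datetime, timezone

    def run_pollster(name, discover, get_samples, heartbeats):
        resources = discover()          # "Executing discovery process ..."
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        # Coordination check: skipped here, as in the log, because the
        # pollster is not configured in a source that needs a hashring.
        heartbeats[name] = datetime.now(timezone.utc)   # heartbeat update
        print(f"Polling pollster {name}")
        samples = [(res, volume) for res in resources
                   for volume in get_samples(res)]
        for res, volume in samples:
            print(f"{res}/{name} volume: {volume}")
        print(f"Finished polling pollster {name}")
        return samples

    run_pollster("network.outgoing.packets.error",
                 lambda: ["5aaac42d-946d-4c6f-9bde-23b8b6613b59"],
                 lambda res: [0],
                 {})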
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.186 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.187 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.187 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.187 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.187 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.187 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets volume: 61 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.188 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.189 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:23:15.186972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.189 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.189 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
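Unlike the cumulative network.outgoing.packets counter above, the *.delta meters report the change since the previous poll, which is why three guests read 0 here and only 1cda3ab8... shows 140 incoming bytes. A sketch of the usual cache-and-subtract pattern behind such meters (hypothetical, clamped so a counter reset cannot go negative):

    _prev = {}

    def bytes_delta(resource, cumulative):
        """Return the increase since the last reading for this resource."""
        delta = max(cumulative - _prev.get(resource, cumulative), 0)
        _prev[resource] = cumulative
        return delta

    print(bytes_delta('vnet0', 1000))  # first poll  -> 0
    print(bytes_delta('vnet0', 1140))  # second poll -> 140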
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.189 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.190 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes volume: 7242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.191 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.191 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.191 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.191 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.191 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:23:15.188542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:23:15.190294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:23:15.191713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
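Note the worker column after the timestamp: worker 15 runs the pollsters while worker 12 confirms the heartbeats via _update_status, so the confirmations lag and interleave (the power.state heartbeat set at .191 is acknowledged at .193, after the next samples have already started). A toy illustration of that hand-off, not ceilometer's implementation, using a queue between a polling thread and a heartbeat writer:

    import queue
    import threading
    from datetime import datetime, timezone

    hb_q = queue.Queue()

    def heartbeat_writer():                  # plays the role of worker 12
        while True:
            name, ts = hb_q.get()
            if name is None:                 # sentinel: shut down
                return
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    writer = threading.Thread(target=heartbeat_writer)
    writer.start()
    hb_q.put(("power.state", datetime.now(timezone.utc)))  # worker 15 side
    hb_q.put((None, None))
    writer.join()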
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.219 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.245 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.268 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.289 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.291 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
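All four guests report power.state volume 1, consistent with the 'running' vm_state in the discovery dump. Assuming the meter mirrors libvirt's virDomainState numbering (an assumption; the log itself only shows the integers), the decoding is:

    # Assumed mapping, following libvirt's virDomainState enum.
    POWER_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }

    print(POWER_STATE[1])  # -> running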
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.291 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.291 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.292 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.292 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.293 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.293 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.294 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.295 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes.delta volume: 1095 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.295 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:23:15.293115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.296 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.296 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.297 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.297 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.298 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.298 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:23:15.298609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.324 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.325 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.325 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.350 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.351 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.351 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.378 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.379 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.379 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.413 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.414 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.414 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.415 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
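disk.device.capacity emits three samples per instance, one per attached block device: two of exactly 1073741824 bytes, matching the flavor's 1 GB root and 1 GB ephemeral disks, plus one device of a few hundred kilobytes (485376 or 583680 bytes), plausibly a config drive. A quick unit check on those raw byte counts:

    GIB = 1024 ** 3

    for volume in (1073741824, 1073741824, 485376):
        print(f"{volume} B = {volume / GIB:.6f} GiB "
              f"({volume / 1024:.0f} KiB)")
    # 1073741824 B is exactly 1 GiB; 485376 B is exactly 474 KiB.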
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.415 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.415 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.416 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.416 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.416 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:23:15.416460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.489 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.489 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.490 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.559 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.560 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.560 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.629 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.629 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.629 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.702 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.703 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.703 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.704 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.704 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.704 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.704 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.704 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.704 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.705 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 39310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.705 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/cpu volume: 34540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.706 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/cpu volume: 35380000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.706 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/cpu volume: 344670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.706 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
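The cpu meter is cumulative guest CPU time in nanoseconds: 39310000000 ns is about 39.3 s, and the outlier 344670000000 ns (about 344.7 s) marks 0975826c... as roughly ten times busier than its peers. Utilisation is conventionally derived from two consecutive samples by dividing the CPU-time delta by the wall-clock interval times the vCPU count; a sketch:

    def cpu_util_pct(cpu_ns_prev, cpu_ns_cur, wall_seconds, vcpus):
        """Percent CPU utilisation between two cumulative cpu samples."""
        return 100.0 * (cpu_ns_cur - cpu_ns_prev) / (wall_seconds * 1e9 * vcpus)

    # Hypothetical next poll 5 minutes later on the 1-vCPU m1.small guest:
    print(cpu_util_pct(39_310_000_000, 39_610_000_000, 300, 1))  # -> 0.1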
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:23:15.704872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.707 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.708 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.708 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.708 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.708 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 786473372 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:23:15.707826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.709 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 127444335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.709 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 200419857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.709 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 683601533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.710 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 109290795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.710 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 110141141 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.710 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 699163782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.710 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 126021412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.711 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.latency volume: 99179876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.712 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:23:15.712786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.713 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.713 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.714 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.714 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.714 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.714 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.714 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.715 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.715 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.715 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:23:15.715064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.715 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.716 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.716 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.716 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.716 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.717 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.717 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.717 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.717 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.717 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.717 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.718 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:23:15.717364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.718 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.718 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.719 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.719 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.719 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.720 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.720 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.720 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.720 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.721 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
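Pairing these read-request counts with the disk.device.read.latency totals polled earlier in the cycle gives an average service time per read, assuming the latency meter is cumulative nanoseconds over the same device lifetime (possibly libvirt's rd_total_times counter; the log itself does not name the unit):

    # First device of 5aaac42d...: 658474829 ns total over 840 reads.
    latency_ns, requests = 658_474_829, 840
    print(f"{latency_ns / requests / 1e6:.2f} ms average per read")  # ~0.78 ms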
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.721 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.721 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.721 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.721 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.722 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.722 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.722 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:23:15.721993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.722 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.723 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:23:15.723887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.724 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.724 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.724 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.724 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.725 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.725 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.725 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.726 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.726 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.726 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.726 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.727 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.727 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
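
The "requires coordination" / "current hashrings are the following [None]" checks exist because, when several agents poll the same source, the workload is partitioned over a hash ring so that each resource is polled by exactly one agent; here no source asks for it. Ceilometer builds its rings with the tooz library; the sketch below illustrates the partitioning idea with self-contained rendezvous hashing instead:

    # Toy rendezvous (highest-random-weight) hashing: a stable owner per
    # key, illustrating what a polling hashring provides. Not ceilometer's
    # tooz-based implementation.
    import hashlib

    def owner(key: str, agents: list[str]) -> str:
        score = lambda a: int(hashlib.md5((a + key).encode()).hexdigest(), 16)
        return max(agents, key=score)

    # Each instance UUID lands on exactly one agent; adding or removing an
    # agent only remaps the keys that agent owned.
    print(owner("5aaac42d-946d-4c6f-9bde-23b8b6613b59", ["compute-0", "compute-1"]))
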
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.728 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:23:15.728505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.729 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.729 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.729 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.730 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.730 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.730 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.730 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.731 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.731 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.731 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.732 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
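
The disk.device.write.bytes triples (e.g. 41779200 / 512 / 0) are cumulative per-device byte counters for each instance's three block devices, not rates; rate-style meters are derived downstream by differencing successive polls. A sketch with hypothetical successive readings:

    # Deriving a rate from two cumulative counter readings, the way a
    # *.rate meter would be computed downstream. Readings are hypothetical.
    def rate(prev, curr, interval_s):
        # Counters are monotonic; a reset/reboot would show as curr < prev.
        return max(curr - prev, 0) / interval_s

    print(rate(41779200, 41852928, 300.0), "bytes/s")  # two polls 300 s apart
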
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.732 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.732 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.733 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.733 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.733 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.733 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.733 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:23:15.733273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.734 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.734 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.734 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.735 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.735 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.735 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.735 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.736 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.736 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.736 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.737 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
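
disk.device.capacity, disk.device.allocation, and disk.device.usage plausibly correspond to the three sizes libvirt reports per block device (virtual capacity, allocated bytes, physical on-disk size). A sketch with libvirt-python, where the connection URI and the device name 'vda' are assumptions:

    # Sketch: the three per-device sizes libvirt exposes; URI and device
    # name are assumptions, the UUID is one of the instances above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("5aaac42d-946d-4c6f-9bde-23b8b6613b59")
    capacity, allocation, physical = dom.blockInfo("vda")  # all in bytes
    print(capacity, allocation, physical)
    conn.close()
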
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.737 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.737 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.738 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.738 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.738 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.738 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.738 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:23:15.738275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.739 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.739 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 1278193356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.739 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 16674926 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.739 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.739 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 1916273341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.740 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 10533639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.740 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.740 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 1803958147 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.740 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 9187833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.740 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.741 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.741 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.741 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.741 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.742 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.742 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.742 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:23:15.742140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.742 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.743 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.743 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.743 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.744 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.744 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.744 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.744 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.744 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.744 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.745 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.745 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.745 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.745 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.746 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.746 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:23:15.744649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.747 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.747 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.748 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.748 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.749 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.749 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.749 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.749 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.749 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.750 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.750 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.751 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.751 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.751 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.751 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.751 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.751 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.752 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.753 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.753 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.753 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
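
The memory.usage values (48.765625, 49.078125, ...) are megabytes per instance. A sketch of reading comparable numbers from libvirt's per-domain memory stats; which stats key ceilometer actually reports is an assumption here:

    # Sketch: per-domain memory stats via libvirt-python; the 'rss' key
    # choice and connection URI are assumptions.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("5aaac42d-946d-4c6f-9bde-23b8b6613b59")
    stats = dom.memoryStats()        # dict of counters, values in KiB
    print(stats["rss"] / 1024.0)     # MB, comparable to 48.765625 above
    conn.close()
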
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:23:15.750329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:23:15.751664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:23:15.752795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:23:15.754590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.755 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.755 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.755 15 DEBUG ceilometer.compute.pollsters [-] 0975826c-6016-48c8-a7dd-1b10a32f91ba/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.756 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
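
network.incoming.bytes lines up with the earlier network.incoming.packets poll (instance 0975826c...: 8364 bytes, 54 packets); both come from per-vNIC interface counters. A sketch with libvirt-python, where the tap device name is an assumption:

    # Sketch: the per-vNIC counters behind network.incoming.*; the tap
    # device name and connection URI are assumptions.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("0975826c-6016-48c8-a7dd-1b10a32f91ba")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap0")
    print(rx_bytes, rx_packets)      # cf. 8364 bytes / 54 packets above
    conn.close()
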
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.756 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.757 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.758 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.758 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.761 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.761 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.762 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.762 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.763 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.763 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.764 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.765 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.765 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.765 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.766 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.766 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.767 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.767 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.768 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.768 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:23:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:23:15.768 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
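
At this point every meter in the task has its "Finished processing pollster" line. To mine a capture like this for the actual sample values, the _stats_to_sample DEBUG lines are regular enough to parse directly; the log path below is an assumption:

    # Pull (instance, meter, value) out of the _stats_to_sample DEBUG
    # lines in this journal; the file path is an assumption.
    import re

    PATTERN = re.compile(
        r"DEBUG ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) "
        r"volume: (?P<value>[\d.]+)")

    with open("/var/log/messages") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                print(m["instance"], m["meter"], float(m["value"]))
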
Feb 19 20:23:17 compute-0 sshd-session[246444]: Received disconnect from 103.250.11.249 port 50066:11: Bye Bye [preauth]
Feb 19 20:23:17 compute-0 sshd-session[246444]: Disconnected from authenticating user root 103.250.11.249 port 50066 [preauth]
Feb 19 20:23:17 compute-0 nova_compute[188777]: 2026-02-19 20:23:17.922 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:18 compute-0 nova_compute[188777]: 2026-02-19 20:23:18.680 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:22 compute-0 nova_compute[188777]: 2026-02-19 20:23:22.925 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:23 compute-0 nova_compute[188777]: 2026-02-19 20:23:23.682 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:27 compute-0 podman[246447]: 2026-02-19 20:23:27.426800893 +0000 UTC m=+0.112031273 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 19 20:23:27 compute-0 podman[246448]: 2026-02-19 20:23:27.439388266 +0000 UTC m=+0.112770177 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:23:27 compute-0 nova_compute[188777]: 2026-02-19 20:23:27.928 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:28 compute-0 nova_compute[188777]: 2026-02-19 20:23:28.685 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:29 compute-0 podman[204724]: time="2026-02-19T20:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:23:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:23:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Feb 19 20:23:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:23:30.433 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:23:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:23:30.434 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:23:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:23:30.435 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:23:31 compute-0 openstack_network_exporter[207898]: ERROR   20:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:23:31 compute-0 openstack_network_exporter[207898]: ERROR   20:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:23:32 compute-0 podman[246490]: 2026-02-19 20:23:32.435464483 +0000 UTC m=+0.111204068 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:23:32 compute-0 nova_compute[188777]: 2026-02-19 20:23:32.931 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:33 compute-0 nova_compute[188777]: 2026-02-19 20:23:33.688 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:35 compute-0 podman[246509]: 2026-02-19 20:23:35.458309542 +0000 UTC m=+0.129696983 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543)
Feb 19 20:23:35 compute-0 podman[246510]: 2026-02-19 20:23:35.468679316 +0000 UTC m=+0.134663649 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:23:37 compute-0 nova_compute[188777]: 2026-02-19 20:23:37.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:37 compute-0 nova_compute[188777]: 2026-02-19 20:23:37.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:23:37 compute-0 nova_compute[188777]: 2026-02-19 20:23:37.935 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:38 compute-0 nova_compute[188777]: 2026-02-19 20:23:38.692 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:39 compute-0 podman[246548]: 2026-02-19 20:23:39.375606301 +0000 UTC m=+0.061621002 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:23:40 compute-0 nova_compute[188777]: 2026-02-19 20:23:40.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:42 compute-0 podman[246570]: 2026-02-19 20:23:42.439012893 +0000 UTC m=+0.119907419 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 19 20:23:42 compute-0 nova_compute[188777]: 2026-02-19 20:23:42.937 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:43 compute-0 nova_compute[188777]: 2026-02-19 20:23:43.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:43 compute-0 nova_compute[188777]: 2026-02-19 20:23:43.696 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:44 compute-0 nova_compute[188777]: 2026-02-19 20:23:44.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:44 compute-0 nova_compute[188777]: 2026-02-19 20:23:44.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:44 compute-0 podman[246590]: 2026-02-19 20:23:44.857086918 +0000 UTC m=+0.162093284 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.295 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.296 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.297 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.298 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.423 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.486 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.487 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.535 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.537 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.614 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.616 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.703 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.711 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.784 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.785 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.846 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.847 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.922 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.923 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.971 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:45 compute-0 nova_compute[188777]: 2026-02-19 20:23:45.977 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.041 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.042 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.100 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.102 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.157 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.158 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.216 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.223 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.306 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.307 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.355 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.356 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.420 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.420 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.464 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba/disk.eph0 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.822 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.824 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=72.18170547485352GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.824 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.824 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.933 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.934 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 0975826c-6016-48c8-a7dd-1b10a32f91ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.934 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.934 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.934 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.934 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.951 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.969 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.969 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:23:46 compute-0 nova_compute[188777]: 2026-02-19 20:23:46.991 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:23:47 compute-0 nova_compute[188777]: 2026-02-19 20:23:47.016 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:23:47 compute-0 nova_compute[188777]: 2026-02-19 20:23:47.125 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:23:47 compute-0 nova_compute[188777]: 2026-02-19 20:23:47.154 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:23:47 compute-0 nova_compute[188777]: 2026-02-19 20:23:47.156 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:23:47 compute-0 nova_compute[188777]: 2026-02-19 20:23:47.156 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:23:47 compute-0 nova_compute[188777]: 2026-02-19 20:23:47.940 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:48 compute-0 nova_compute[188777]: 2026-02-19 20:23:48.156 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:48 compute-0 nova_compute[188777]: 2026-02-19 20:23:48.157 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:23:48 compute-0 nova_compute[188777]: 2026-02-19 20:23:48.574 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:23:48 compute-0 nova_compute[188777]: 2026-02-19 20:23:48.575 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:23:48 compute-0 nova_compute[188777]: 2026-02-19 20:23:48.575 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:23:48 compute-0 nova_compute[188777]: 2026-02-19 20:23:48.699 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:49 compute-0 nova_compute[188777]: 2026-02-19 20:23:49.775 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updating instance_info_cache with network_info: [{"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:23:49 compute-0 nova_compute[188777]: 2026-02-19 20:23:49.792 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:23:49 compute-0 nova_compute[188777]: 2026-02-19 20:23:49.793 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:23:49 compute-0 nova_compute[188777]: 2026-02-19 20:23:49.794 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:50 compute-0 nova_compute[188777]: 2026-02-19 20:23:50.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:23:52 compute-0 nova_compute[188777]: 2026-02-19 20:23:52.947 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:53 compute-0 nova_compute[188777]: 2026-02-19 20:23:53.702 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:57 compute-0 sshd-session[246664]: Received disconnect from 158.180.74.7 port 17328:11: Bye Bye [preauth]
Feb 19 20:23:57 compute-0 sshd-session[246664]: Disconnected from authenticating user root 158.180.74.7 port 17328 [preauth]
Feb 19 20:23:57 compute-0 nova_compute[188777]: 2026-02-19 20:23:57.951 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:58 compute-0 podman[246667]: 2026-02-19 20:23:58.415589563 +0000 UTC m=+0.097305724 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:23:58 compute-0 podman[246666]: 2026-02-19 20:23:58.420016231 +0000 UTC m=+0.101046811 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, config_id=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, architecture=x86_64)
Feb 19 20:23:58 compute-0 nova_compute[188777]: 2026-02-19 20:23:58.706 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:23:59 compute-0 podman[204724]: time="2026-02-19T20:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:23:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:23:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
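These `podman[204724]` entries are the libpod REST service access log: prometheus-podman-exporter polls `/v4.9.3/libpod/containers/json` and `.../containers/stats` over the API socket that later entries show mounted at `/run/podman/podman.sock`. A sketch of the same query; `UnixHTTPConnection` is a hypothetical helper, not a stdlib class:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Hypothetical helper: plain HTTP over an AF_UNIX socket."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```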
Feb 19 20:24:01 compute-0 openstack_network_exporter[207898]: ERROR   20:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:24:01 compute-0 openstack_network_exporter[207898]: ERROR   20:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
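Both exporter errors are expected on a kernel-datapath host: the `dpif-netdev/pmd-*` appctl commands only apply to the userspace (netdev/DPDK) datapath, so ovs-vswitchd answers "please specify an existing datapath". Reproducing the probe by hand (assumes `ovs-appctl` on PATH, targeting the default ovs-vswitchd control socket):

```python
import subprocess

# The exporter's two failing probes, run manually; on a kernel-datapath
# host each returns the same "please specify an existing datapath" error.
for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
    r = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
    print(cmd, "->", (r.stderr or r.stdout).strip())
```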
Feb 19 20:24:02 compute-0 nova_compute[188777]: 2026-02-19 20:24:02.953 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:03 compute-0 podman[246711]: 2026-02-19 20:24:03.432111457 +0000 UTC m=+0.100385780 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:24:03 compute-0 nova_compute[188777]: 2026-02-19 20:24:03.710 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:06 compute-0 podman[246731]: 2026-02-19 20:24:06.399193854 +0000 UTC m=+0.081831401 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:24:06 compute-0 podman[246730]: 2026-02-19 20:24:06.42310547 +0000 UTC m=+0.111001962 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, name=ubi9, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, maintainer=Red Hat, Inc.)
Feb 19 20:24:07 compute-0 nova_compute[188777]: 2026-02-19 20:24:07.956 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:08 compute-0 nova_compute[188777]: 2026-02-19 20:24:08.712 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:10 compute-0 podman[246768]: 2026-02-19 20:24:10.408326107 +0000 UTC m=+0.087091606 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:24:12 compute-0 nova_compute[188777]: 2026-02-19 20:24:12.957 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:13 compute-0 podman[246792]: 2026-02-19 20:24:13.436744239 +0000 UTC m=+0.122473590 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 19 20:24:13 compute-0 nova_compute[188777]: 2026-02-19 20:24:13.716 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:15 compute-0 podman[246811]: 2026-02-19 20:24:15.473315213 +0000 UTC m=+0.150178842 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:24:17 compute-0 nova_compute[188777]: 2026-02-19 20:24:17.960 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.719 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.897 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.898 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.898 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.898 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.899 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.900 188781 INFO nova.compute.manager [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Terminating instance
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.901 188781 DEBUG nova.compute.manager [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
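The lock lines above show how the delete path is serialized: `terminate_instance` takes a lock named after the instance UUID, briefly takes `<uuid>-events` to clear pending external events, and only then calls `_shutdown_instance`. A minimal sketch of the same per-resource locking pattern with `oslo.concurrency` (not Nova's actual code):

```python
from oslo_concurrency import lockutils

INSTANCE_UUID = "0975826c-6016-48c8-a7dd-1b10a32f91ba"

# Serialize lifecycle operations on one instance, as in the
# "Acquiring lock ... by do_terminate_instance" entries above.
@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    print("destroying instance on the hypervisor")

do_terminate_instance()
```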
Feb 19 20:24:18 compute-0 kernel: tapdb2ce91f-77 (unregistering): left promiscuous mode
Feb 19 20:24:18 compute-0 NetworkManager[57033]: <info>  [1771532658.9490] device (tapdb2ce91f-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.955 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:18 compute-0 ovn_controller[98843]: 2026-02-19T20:24:18Z|00050|binding|INFO|Releasing lport db2ce91f-7740-44a2-bab1-8455e2dfddde from this chassis (sb_readonly=0)
Feb 19 20:24:18 compute-0 ovn_controller[98843]: 2026-02-19T20:24:18Z|00051|binding|INFO|Setting lport db2ce91f-7740-44a2-bab1-8455e2dfddde down in Southbound
Feb 19 20:24:18 compute-0 ovn_controller[98843]: 2026-02-19T20:24:18Z|00052|binding|INFO|Removing iface tapdb2ce91f-77 ovn-installed in OVS
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.965 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:18 compute-0 nova_compute[188777]: 2026-02-19 20:24:18.970 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:18.977 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:93:1a 192.168.0.213'], port_security=['fa:16:3e:4d:93:1a 192.168.0.213'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5pf5gh4amqsx-kmyzbqhhqloy-unhgieiyt6e3-port-4pllgrspjkj2', 'neutron:cidrs': '192.168.0.213/24', 'neutron:device_id': '0975826c-6016-48c8-a7dd-1b10a32f91ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5pf5gh4amqsx-kmyzbqhhqloy-unhgieiyt6e3-port-4pllgrspjkj2', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=db2ce91f-7740-44a2-bab1-8455e2dfddde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:24:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:18.981 108175 INFO neutron.agent.ovn.metadata.agent [-] Port db2ce91f-7740-44a2-bab1-8455e2dfddde in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 unbound from our chassis
Feb 19 20:24:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:18.983 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:24:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:18.995 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[de78f29a-bce1-48f6-8a0b-03c9ce938aa4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:24:18 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 19 20:24:18 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 56.290s CPU time.
Feb 19 20:24:19 compute-0 systemd-machined[158158]: Machine qemu-2-instance-00000002 terminated.
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.025 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4728e4-10ab-4ca8-94e0-99eb5b9c7891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.029 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[00b67520-ac66-4498-b6ea-726174cacf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.052 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[ca542beb-a1f7-4910-a0e9-658e1ce479c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.066 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebe8d37-f127-4943-b597-7104e5041c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 39446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246849, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.078 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b5a758-9a66-411a-b9a2-381be95649fc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348361, 'tstamp': 348361}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246850, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348365, 'tstamp': 348365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246850, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
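The two privsep replies above are pyroute2 netlink dumps (`RTM_NEWLINK`, `RTM_NEWADDR`) taken inside the `ovnmeta-ec82c3b7-...` namespace; they show the metadata tap holding 169.254.169.254/32 and 192.168.0.2/24. A sketch reading the same state directly, assuming pyroute2 is installed and the namespace is reachable as root:

```python
from pyroute2 import NetNS

# Dump the addresses carried in the privsep replies above, straight from
# the OVN metadata namespace named in the netlink headers' 'target' field.
with NetNS("ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681") as ns:
    for addr in ns.get_addr():
        attrs = dict(addr["attrs"])
        print(attrs.get("IFA_LABEL"), attrs["IFA_ADDRESS"],
              "/%s" % addr["prefixlen"])
```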
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.080 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.082 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.086 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.087 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.087 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.088 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.089 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
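The metadata agent's OVS transactions are idempotent: it re-adds the metadata port to `br-int` and re-asserts `external_ids:iface-id`, and ovsdbapp reports "Transaction caused no change" when the rows already match. A minimal sketch of the same command pattern with ovsdbapp, assuming the standard local OVS database socket path:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local Open_vSwitch database and replay the agent's
# AddPort + DbSet pair; may_exist=True makes reruns no-ops.
idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=3))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tapec82c3b7-50", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tapec82c3b7-50",
        ("external_ids", {"iface-id": "a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce"})))
```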
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.181 188781 INFO nova.virt.libvirt.driver [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance destroyed successfully.
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.182 188781 DEBUG nova.objects.instance [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'resources' on Instance uuid 0975826c-6016-48c8-a7dd-1b10a32f91ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.197 188781 DEBUG nova.virt.libvirt.vif [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:13:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-kmyzbqhhqloy-unhgieiyt6e3-vnf-p7rghgh5js3a',id=2,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:13:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-cv4t54vg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:13:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTAxNzYyNTMzNzg5OTg5NzQ4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Feb 19 20:24:19 compute-0 nova_compute[188777]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTAxNzYyNTMzNzg5OTg5NzQ4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwMTc2MjUzMzc4OTk4OTc0ODc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDE3NjI1MzM3ODk5ODk3NDg3PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=0975826c-6016-48c8-a7dd-1b10a32f91ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.198 188781 DEBUG nova.network.os_vif_util [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "address": "fa:16:3e:4d:93:1a", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2ce91f-77", "ovs_interfaceid": "db2ce91f-7740-44a2-bab1-8455e2dfddde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.198 188781 DEBUG nova.network.os_vif_util [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.199 188781 DEBUG os_vif [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.200 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.201 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb2ce91f-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.204 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.206 188781 INFO os_vif [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=db2ce91f-7740-44a2-bab1-8455e2dfddde,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdb2ce91f-77')
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.208 188781 INFO nova.virt.libvirt.driver [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Deleting instance files /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba_del
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.209 188781 INFO nova.virt.libvirt.driver [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Deletion of /var/lib/nova/instances/0975826c-6016-48c8-a7dd-1b10a32f91ba_del complete
Feb 19 20:24:19 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:24:19.197 188781 DEBUG nova.virt.libvirt.vif [None req-038eed1a-0f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
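This rsyslog warning accounts for the split entry above: the nova VIF debug message exceeds the configured 8096-byte cap, so its tail (the base64 `user_data` continuation) lands in a separate record. If whole entries matter for log parsing, the cap can be raised with rsyslog's global size directive, set near the top of the configuration before any inputs load; a sketch:

```
# /etc/rsyslog.conf -- raise the per-message cap above Nova's ~8 KiB entries
$MaxMessageSize 16k
```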
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.316 188781 DEBUG nova.compute.manager [req-ceae087a-24ae-4b2d-8788-378992e68b88 req-3e9aaf21-9b93-4ea0-9d39-c0d9f790e9db 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-vif-unplugged-db2ce91f-7740-44a2-bab1-8455e2dfddde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.316 188781 DEBUG oslo_concurrency.lockutils [req-ceae087a-24ae-4b2d-8788-378992e68b88 req-3e9aaf21-9b93-4ea0-9d39-c0d9f790e9db 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.317 188781 DEBUG oslo_concurrency.lockutils [req-ceae087a-24ae-4b2d-8788-378992e68b88 req-3e9aaf21-9b93-4ea0-9d39-c0d9f790e9db 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.317 188781 DEBUG oslo_concurrency.lockutils [req-ceae087a-24ae-4b2d-8788-378992e68b88 req-3e9aaf21-9b93-4ea0-9d39-c0d9f790e9db 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.317 188781 DEBUG nova.compute.manager [req-ceae087a-24ae-4b2d-8788-378992e68b88 req-3e9aaf21-9b93-4ea0-9d39-c0d9f790e9db 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] No waiting events found dispatching network-vif-unplugged-db2ce91f-7740-44a2-bab1-8455e2dfddde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.317 188781 DEBUG nova.compute.manager [req-ceae087a-24ae-4b2d-8788-378992e68b88 req-3e9aaf21-9b93-4ea0-9d39-c0d9f790e9db 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-vif-unplugged-db2ce91f-7740-44a2-bab1-8455e2dfddde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.318 188781 DEBUG nova.virt.libvirt.host [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.318 188781 INFO nova.virt.libvirt.host [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] UEFI support detected
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.321 188781 INFO nova.compute.manager [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Took 0.42 seconds to destroy the instance on the hypervisor.
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.321 188781 DEBUG oslo.service.loopingcall [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.321 188781 DEBUG nova.compute.manager [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.321 188781 DEBUG nova.network.neutron [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.593 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:24:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:19.594 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:24:19 compute-0 nova_compute[188777]: 2026-02-19 20:24:19.595 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.661 188781 DEBUG nova.network.neutron [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.681 188781 INFO nova.compute.manager [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Took 1.36 seconds to deallocate network for instance.
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.749 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.750 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.870 188781 DEBUG nova.compute.provider_tree [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.893 188781 DEBUG nova.scheduler.client.report [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.920 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:20 compute-0 nova_compute[188777]: 2026-02-19 20:24:20.961 188781 INFO nova.scheduler.client.report [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Deleted allocations for instance 0975826c-6016-48c8-a7dd-1b10a32f91ba
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.059 188781 DEBUG oslo_concurrency.lockutils [None req-038eed1a-0fb4-453c-8a79-601576244a0c 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.396 188781 DEBUG nova.compute.manager [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.397 188781 DEBUG oslo_concurrency.lockutils [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.397 188781 DEBUG oslo_concurrency.lockutils [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.398 188781 DEBUG oslo_concurrency.lockutils [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "0975826c-6016-48c8-a7dd-1b10a32f91ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.398 188781 DEBUG nova.compute.manager [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] No waiting events found dispatching network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.398 188781 WARNING nova.compute.manager [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received unexpected event network-vif-plugged-db2ce91f-7740-44a2-bab1-8455e2dfddde for instance with vm_state deleted and task_state None.
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.399 188781 DEBUG nova.compute.manager [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Received event network-changed-db2ce91f-7740-44a2-bab1-8455e2dfddde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.399 188781 DEBUG nova.compute.manager [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Refreshing instance network info cache due to event network-changed-db2ce91f-7740-44a2-bab1-8455e2dfddde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.400 188781 DEBUG oslo_concurrency.lockutils [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.400 188781 DEBUG oslo_concurrency.lockutils [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.401 188781 DEBUG nova.network.neutron [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Refreshing network info cache for port db2ce91f-7740-44a2-bab1-8455e2dfddde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:24:21 compute-0 nova_compute[188777]: 2026-02-19 20:24:21.579 188781 DEBUG nova.network.neutron [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:24:22 compute-0 nova_compute[188777]: 2026-02-19 20:24:22.898 188781 DEBUG nova.network.neutron [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 19 20:24:22 compute-0 nova_compute[188777]: 2026-02-19 20:24:22.899 188781 DEBUG oslo_concurrency.lockutils [req-fccff9d6-f276-4497-aa34-90c6e547e28c req-9e24fcf2-c32c-4742-9a32-64d5c07c1fc3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-0975826c-6016-48c8-a7dd-1b10a32f91ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:24:22 compute-0 nova_compute[188777]: 2026-02-19 20:24:22.963 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:24 compute-0 nova_compute[188777]: 2026-02-19 20:24:24.204 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:26 compute-0 sshd-session[246874]: Invalid user n8n from 103.119.94.10 port 42508
Feb 19 20:24:27 compute-0 nova_compute[188777]: 2026-02-19 20:24:27.965 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:28 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:28.599 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:24:28 compute-0 sshd-session[246874]: Received disconnect from 103.119.94.10 port 42508:11: Bye Bye [preauth]
Feb 19 20:24:28 compute-0 sshd-session[246874]: Disconnected from invalid user n8n 103.119.94.10 port 42508 [preauth]
Feb 19 20:24:29 compute-0 nova_compute[188777]: 2026-02-19 20:24:29.207 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:29 compute-0 podman[246877]: 2026-02-19 20:24:29.438691729 +0000 UTC m=+0.101582488 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:24:29 compute-0 podman[246878]: 2026-02-19 20:24:29.457246637 +0000 UTC m=+0.122926492 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:24:29 compute-0 podman[204724]: time="2026-02-19T20:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:24:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:24:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Feb 19 20:24:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:30.435 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:30.436 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:24:30.437 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:31 compute-0 openstack_network_exporter[207898]: ERROR   20:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:24:31 compute-0 openstack_network_exporter[207898]: ERROR   20:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:24:31 compute-0 sshd-session[246921]: Invalid user sammy from 83.235.16.111 port 46966
Feb 19 20:24:31 compute-0 sshd-session[246921]: Received disconnect from 83.235.16.111 port 46966:11: Bye Bye [preauth]
Feb 19 20:24:31 compute-0 sshd-session[246921]: Disconnected from invalid user sammy 83.235.16.111 port 46966 [preauth]
Feb 19 20:24:32 compute-0 nova_compute[188777]: 2026-02-19 20:24:32.968 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:34 compute-0 nova_compute[188777]: 2026-02-19 20:24:34.176 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771532659.1751046, 0975826c-6016-48c8-a7dd-1b10a32f91ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:24:34 compute-0 nova_compute[188777]: 2026-02-19 20:24:34.177 188781 INFO nova.compute.manager [-] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] VM Stopped (Lifecycle Event)
Feb 19 20:24:34 compute-0 nova_compute[188777]: 2026-02-19 20:24:34.202 188781 DEBUG nova.compute.manager [None req-2cd7647b-fdc0-4d63-a7d0-8490ae93ba6a - - - - - -] [instance: 0975826c-6016-48c8-a7dd-1b10a32f91ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:24:34 compute-0 nova_compute[188777]: 2026-02-19 20:24:34.210 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:34 compute-0 podman[246923]: 2026-02-19 20:24:34.427454839 +0000 UTC m=+0.107820233 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 19 20:24:37 compute-0 podman[246942]: 2026-02-19 20:24:37.43503106 +0000 UTC m=+0.103199088 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, version=9.4, config_id=kepler, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:24:37 compute-0 podman[246943]: 2026-02-19 20:24:37.45107252 +0000 UTC m=+0.114051747 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Feb 19 20:24:37 compute-0 nova_compute[188777]: 2026-02-19 20:24:37.972 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:38 compute-0 nova_compute[188777]: 2026-02-19 20:24:38.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:38 compute-0 nova_compute[188777]: 2026-02-19 20:24:38.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:24:39 compute-0 nova_compute[188777]: 2026-02-19 20:24:39.213 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:40 compute-0 nova_compute[188777]: 2026-02-19 20:24:40.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:40 compute-0 nova_compute[188777]: 2026-02-19 20:24:40.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:24:41 compute-0 podman[246980]: 2026-02-19 20:24:41.373482333 +0000 UTC m=+0.057074270 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:24:42 compute-0 nova_compute[188777]: 2026-02-19 20:24:42.298 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:42 compute-0 nova_compute[188777]: 2026-02-19 20:24:42.973 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:43 compute-0 nova_compute[188777]: 2026-02-19 20:24:43.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:43 compute-0 sshd-session[247001]: Invalid user oracle from 154.12.80.151 port 39068
Feb 19 20:24:43 compute-0 podman[247003]: 2026-02-19 20:24:43.675819941 +0000 UTC m=+0.095323082 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:24:43 compute-0 sshd-session[247001]: Received disconnect from 154.12.80.151 port 39068:11: Bye Bye [preauth]
Feb 19 20:24:43 compute-0 sshd-session[247001]: Disconnected from invalid user oracle 154.12.80.151 port 39068 [preauth]
Feb 19 20:24:44 compute-0 nova_compute[188777]: 2026-02-19 20:24:44.215 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:44 compute-0 nova_compute[188777]: 2026-02-19 20:24:44.269 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:45 compute-0 nova_compute[188777]: 2026-02-19 20:24:45.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:45 compute-0 nova_compute[188777]: 2026-02-19 20:24:45.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.294 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.294 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.295 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.295 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.394 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 podman[247024]: 2026-02-19 20:24:46.470990523 +0000 UTC m=+0.149010886 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.474 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.475 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.545 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.547 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.599 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.600 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.650 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.663 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.716 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.718 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.801 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.802 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.854 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.855 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.918 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.925 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.970 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:46 compute-0 nova_compute[188777]: 2026-02-19 20:24:46.971 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.021 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.022 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.068 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.069 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.126 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.535 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.536 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4759MB free_disk=72.20394515991211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.537 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.537 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.880 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.881 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.882 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.882 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.883 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:24:47 compute-0 nova_compute[188777]: 2026-02-19 20:24:47.976 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:48 compute-0 nova_compute[188777]: 2026-02-19 20:24:48.195 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:24:48 compute-0 nova_compute[188777]: 2026-02-19 20:24:48.243 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
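
The inventory record above is enough to reconstruct how placement turns physical totals into schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. A minimal check using only the logged numbers:

    # Placement's effective capacity per resource class is
    # (total - reserved) * allocation_ratio; the figures below are
    # copied from the inventory record above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable units")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
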
Feb 19 20:24:48 compute-0 nova_compute[188777]: 2026-02-19 20:24:48.314 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:24:48 compute-0 nova_compute[188777]: 2026-02-19 20:24:48.315 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
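
The acquire/held/release lines bracketing the update come from oslo.concurrency: the resource tracker serializes all "compute_resources" work behind one named lock. A minimal sketch of the same pattern with the public lockutils decorator (an in-process fair lock by default); the function body here is a stand-in, not Nova's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs with the named lock held; a concurrent caller blocks in
        # the decorator, which is what the "waited 0.000s" (uncontended)
        # and "held 0.777s" figures above measure.
        ...
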
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.219 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.317 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.353 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.354 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.601 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.602 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:24:49 compute-0 nova_compute[188777]: 2026-02-19 20:24:49.602 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:24:50 compute-0 nova_compute[188777]: 2026-02-19 20:24:50.711 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
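
The instance_info_cache payload above nests VIF -> network -> subnets -> ips -> floating_ips. A hypothetical helper that flattens it to (port, fixed IP, floating IPs); the literal below is a pared-down copy of the logged entry:

    # Pared-down copy of the instance_info_cache entry logged above.
    network_info = [{
        'id': 'bbe0af68-c9d2-4b14-854b-b5355d9ef899',
        'network': {'subnets': [{'ips': [{
            'address': '192.168.0.76',
            'floating_ips': [{'address': '192.168.122.174'}],
        }]}]},
    }]

    def list_addresses(vifs):
        # Walk VIF -> network -> subnet -> fixed IP -> floating IPs.
        for vif in vifs:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (vif['id'], ip['address'],
                           [f['address'] for f in ip.get('floating_ips', [])])

    print(list(list_addresses(network_info)))
    # [('bbe0af68-c9d2-4b14-854b-b5355d9ef899', '192.168.0.76',
    #   ['192.168.122.174'])]
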
Feb 19 20:24:50 compute-0 nova_compute[188777]: 2026-02-19 20:24:50.730 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:24:50 compute-0 nova_compute[188777]: 2026-02-19 20:24:50.730 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:24:50 compute-0 nova_compute[188777]: 2026-02-19 20:24:50.731 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:50 compute-0 nova_compute[188777]: 2026-02-19 20:24:50.731 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:52 compute-0 nova_compute[188777]: 2026-02-19 20:24:52.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:24:52 compute-0 nova_compute[188777]: 2026-02-19 20:24:52.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:24:52 compute-0 nova_compute[188777]: 2026-02-19 20:24:52.282 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:24:52 compute-0 ovn_controller[98843]: 2026-02-19T20:24:52Z|00053|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Feb 19 20:24:52 compute-0 nova_compute[188777]: 2026-02-19 20:24:52.979 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:54 compute-0 nova_compute[188777]: 2026-02-19 20:24:54.223 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:57 compute-0 nova_compute[188777]: 2026-02-19 20:24:57.982 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:59 compute-0 nova_compute[188777]: 2026-02-19 20:24:59.226 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:24:59 compute-0 podman[204724]: time="2026-02-19T20:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:24:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:24:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
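
Those two GETs are the Podman REST API answering over the local socket (the podman_exporter config later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock). A stdlib-only sketch issuing the same containers/json query; it assumes root access and an active podman.socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a Unix socket, stdlib only."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
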
Feb 19 20:25:00 compute-0 podman[247089]: 2026-02-19 20:25:00.408493062 +0000 UTC m=+0.084719184 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Feb 19 20:25:00 compute-0 podman[247090]: 2026-02-19 20:25:00.431245959 +0000 UTC m=+0.101457385 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:25:01 compute-0 openstack_network_exporter[207898]: ERROR   20:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:25:01 compute-0 openstack_network_exporter[207898]: ERROR   20:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
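
These two errors are expected on this host: the dpif-netdev/pmd-* appctl commands only answer for a userspace (netdev/DPDK) datapath, and the VIF details earlier in this log show datapath_type "system", i.e. the kernel datapath, which has no PMD threads. A sketch of a guard the exporter could apply first; the 'netdev' substring test is a heuristic, not an official API:

    import subprocess

    def has_netdev_datapath():
        # 'ovs-appctl dpif/show' lists the configured datapaths; the
        # dpif-netdev/pmd-* commands only answer for netdev (userspace/
        # DPDK) ones, hence "please specify an existing datapath" on a
        # kernel-datapath host like this one.
        out = subprocess.run(['ovs-appctl', 'dpif/show'],
                             capture_output=True, text=True, check=True).stdout
        return 'netdev' in out
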
Feb 19 20:25:02 compute-0 nova_compute[188777]: 2026-02-19 20:25:02.986 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:04 compute-0 nova_compute[188777]: 2026-02-19 20:25:04.228 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:05 compute-0 podman[247134]: 2026-02-19 20:25:05.396340829 +0000 UTC m=+0.075945882 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 19 20:25:07 compute-0 nova_compute[188777]: 2026-02-19 20:25:07.988 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:08 compute-0 podman[247154]: 2026-02-19 20:25:08.409480151 +0000 UTC m=+0.091221586 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 19 20:25:08 compute-0 podman[247153]: 2026-02-19 20:25:08.423327063 +0000 UTC m=+0.106024728 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, vcs-type=git, managed_by=edpm_ansible)
Feb 19 20:25:09 compute-0 nova_compute[188777]: 2026-02-19 20:25:09.230 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:12 compute-0 podman[247193]: 2026-02-19 20:25:12.415644595 +0000 UTC m=+0.099929498 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:25:12 compute-0 nova_compute[188777]: 2026-02-19 20:25:12.990 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:14 compute-0 nova_compute[188777]: 2026-02-19 20:25:14.233 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:14 compute-0 podman[247220]: 2026-02-19 20:25:14.420259464 +0000 UTC m=+0.096914483 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:25:14 compute-0 sshd-session[247218]: Invalid user papry from 160.187.147.124 port 43836
Feb 19 20:25:15 compute-0 sshd-session[247218]: Received disconnect from 160.187.147.124 port 43836:11: Bye Bye [preauth]
Feb 19 20:25:15 compute-0 sshd-session[247218]: Disconnected from invalid user papry 160.187.147.124 port 43836 [preauth]
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.141 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.142 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
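
The pair of messages above describes a plain task/worker mismatch: the [pollsters] source carries more pollsters than the single worker thread, so they run back to back on the same concurrent.futures ThreadPoolExecutor the next lines log. The shape of that dispatch, with an illustrative pollster list:

    from concurrent.futures import ThreadPoolExecutor

    # Illustrative subset of the pollsters this cycle actually runs.
    pollsters = ['network.outgoing.packets.error', 'network.outgoing.packets',
                 'network.incoming.bytes.delta', 'network.outgoing.bytes',
                 'power.state']

    with ThreadPoolExecutor(max_workers=1) as pool:   # "[1] threads" above
        # One worker, five tasks: they execute sequentially, so the
        # cycle time is the sum of the individual poll durations.
        results = list(pool.map(lambda name: f'polled {name}', pollsters))
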
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.142 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.151 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f51eb0b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.152 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.157 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b', 'name': 'vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.162 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'name': 'vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
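
All three discovered instances use the m1.small flavor, and summing their flavors reproduces the tracker's earlier figures: used_vcpus=3, and used_ram=2048MB once the 512 MB 'reserved' from the MEMORY_MB inventory is folded in (the final resource view above appears to include that host reservation in used_ram). A quick check with the logged numbers:

    # Flavor data copied from the three discovery records above.
    flavors = [{'vcpus': 1, 'ram': 512}] * 3   # test_0 plus the two vn-* VMs
    reserved_host_memory_mb = 512              # 'reserved' in the MEMORY_MB inventory

    used_vcpus = sum(f['vcpus'] for f in flavors)
    used_ram = sum(f['ram'] for f in flavors) + reserved_host_memory_mb
    print(used_vcpus, used_ram)   # 3 2048 -> matches the final resource view
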
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.163 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.164 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:25:15.164111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.171 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.177 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.183 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.184 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.184 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.185 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.186 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.186 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:25:15.185934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.187 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.187 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.188 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.189 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.189 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.189 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:25:15.189452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.189 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.190 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.190 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.191 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.191 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.192 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.192 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.192 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.192 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.193 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.193 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.194 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.194 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.195 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:25:15.192357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:25:15.195569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.225 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.251 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.278 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
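All three instances report power.state volume 1. Assuming the meter carries the libvirt virDomainState enum value (an assumption; the log itself only shows the raw number), 1 decodes to a running guest:

    # Assumption: power.state carries the libvirt virDomainState enum value.
    VIR_DOMAIN_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATES[1])  # all three guests above -> "running"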
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.279 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.279 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.280 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.280 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.281 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:25:15.280505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.282 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.282 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
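network.outgoing.bytes.delta reports the per-poll change of the cumulative network.outgoing.bytes counter polled moments earlier: instance 5aaac42d-... shows a delta of 0, consistent with its cumulative counter not having moved since the previous cycle, while the other two instances each gained 70 bytes. A sketch of the cumulative-to-delta bookkeeping (my own code, not ceilometer's):

    # Minimal cumulative -> delta bookkeeping, keyed per (instance, meter):
    _last = {}

    def to_delta(instance, meter, volume):
        prev = _last.get((instance, meter))
        _last[(instance, meter)] = volume
        return None if prev is None else volume - prev

    to_delta("5aaac42d", "network.outgoing.bytes", 2342)         # first poll -> None
    print(to_delta("5aaac42d", "network.outgoing.bytes", 2342))  # unchanged -> 0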
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.283 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:25:15.283322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.307 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.307 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.308 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.334 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.335 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.335 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.359 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.359 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.360 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.360 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
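disk.device.capacity emits one sample per attached block device, so each instance produces three lines here: two devices of exactly 1 GiB and one much smaller device of a few hundred KiB (plausibly a config drive, though the log does not name the devices). Quick unit arithmetic:

    # 1073741824 bytes is exactly 1 GiB; the third device is sub-MiB:
    assert 1073741824 == 1 << 30
    print(1073741824 / 2**30)   # 1.0 GiB
    print(485376 / 1024)        # 474.0 KiB (instance 5aaac42d-...)
    print(583680 / 1024)        # 570.0 KiB (the other two instances)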
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.361 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.361 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.361 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.361 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.361 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:25:15.361695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.432 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.433 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.433 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.549 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.550 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.551 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.638 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.638 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.639 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.639 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.640 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.640 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.640 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.640 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.640 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.641 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 40970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:25:15.640578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.641 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/cpu volume: 36220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.641 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/cpu volume: 37060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.642 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
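The cpu meter is the guest's cumulative CPU time in nanoseconds, so these volumes translate directly to seconds of CPU consumed:

    # cpu is cumulative CPU time in nanoseconds:
    for uuid, ns in [("5aaac42d", 40970000000),
                     ("14ed9fe0", 36220000000),
                     ("1cda3ab8", 37060000000)]:
        print(uuid, ns / 1e9, "s")   # ~40.97 s, ~36.22 s, ~37.06 s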
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.642 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.642 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.642 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.642 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.643 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.643 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.643 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.644 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.644 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.644 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.645 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 786473372 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.645 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 127444335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:25:15.643381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.645 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.latency volume: 200419857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.646 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 683601533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.646 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 109290795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.646 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 110141141 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.647 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
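disk.device.read.latency is a cumulative counter of nanoseconds spent servicing reads per device. Paired with disk.device.read.requests (polled later in this same cycle), it gives a rough mean latency per read; for the first device of instance 5aaac42d-..., under the assumption that the two counters cover the same reads:

    # Mean read latency for one device, assuming the latency counter is
    # cumulative nanoseconds and read.requests counts the same reads:
    latency_ns = 658474829   # disk.device.read.latency, first device
    requests = 840           # disk.device.read.requests, same device
    print(latency_ns / requests / 1e6)  # ~0.78 ms per read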
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.647 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.648 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.648 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.648 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.649 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.649 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.650 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.650 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:25:15.648400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.655 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.655 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.655 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.655 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.655 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.656 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.656 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.657 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.657 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.658 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:25:15.656015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.658 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.658 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.658 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.659 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.659 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.659 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.660 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.660 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.660 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:25:15.659104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.661 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.661 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.661 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.662 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.662 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.663 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.663 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.663 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.663 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.664 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.664 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.664 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.665 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.665 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.665 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:25:15.664107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.666 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.666 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.666 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.666 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.666 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.667 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.667 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.668 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:25:15.666656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.668 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.668 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.669 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.669 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.669 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.670 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.670 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.671 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.671 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.671 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.671 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.671 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.672 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.672 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.672 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.673 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.673 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:25:15.671528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.673 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.674 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.674 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.674 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.675 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.675 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.675 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.675 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.675 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.676 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.676 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:25:15.675985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.676 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.677 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.677 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.677 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.678 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.678 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.678 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.679 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.679 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
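With capacity, usage, and allocation now all polled for the same devices, the three meters nest the way a sparse (thin-provisioned) image should: bytes used <= bytes allocated <= virtual size. For the first device of instance 5aaac42d-...:

    # usage <= allocation <= capacity for a sparse image:
    usage, allocation, capacity = 21233664, 21307392, 1073741824
    assert usage <= allocation <= capacity
    print(f"{usage / capacity:.1%}")  # ~2.0% of the 1 GiB virtual size in use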
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.679 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.680 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.680 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.680 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.680 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.681 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:25:15.680526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.681 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.681 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.682 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 1278193356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.682 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 16674926 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.682 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.683 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 1916273341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.683 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 10533639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.683 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.685 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.685 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.685 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.686 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.686 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.686 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:25:15.686552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.687 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.688 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.688 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.689 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.690 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.690 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.691 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.691 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.692 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.692 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:25:15.691858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.693 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.694 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.694 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.695 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.695 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.696 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.696 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.697 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.698 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.698 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.699 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.699 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.699 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.700 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:25:15.699978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.702 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.702 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.702 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.702 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.702 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.703 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:25:15.702946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.704 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.705 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.705 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.705 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.705 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.706 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.707 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:25:15.706055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.707 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/memory.usage volume: 48.95703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.708 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.708 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.708 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.709 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.709 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.709 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.709 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.710 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:25:15.709347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.711 15 DEBUG ceilometer.compute.pollsters [-] 14ed9fe0-b150-4bd8-852e-7f2f62d4374b/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.712 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.712 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.713 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.714 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.714 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.714 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.715 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.715 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.715 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.715 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.715 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.715 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.716 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.717 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.717 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.717 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.717 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.717 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.717 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.718 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.718 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:25:15.718 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:25:17 compute-0 podman[247243]: 2026-02-19 20:25:17.482876835 +0000 UTC m=+0.157443895 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:25:17 compute-0 sshd-session[247241]: Invalid user httpd from 103.179.56.24 port 36736
Feb 19 20:25:17 compute-0 nova_compute[188777]: 2026-02-19 20:25:17.993 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:18 compute-0 sshd-session[247241]: Received disconnect from 103.179.56.24 port 36736:11: Bye Bye [preauth]
Feb 19 20:25:18 compute-0 sshd-session[247241]: Disconnected from invalid user httpd 103.179.56.24 port 36736 [preauth]
Feb 19 20:25:19 compute-0 nova_compute[188777]: 2026-02-19 20:25:19.235 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.294 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.337 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.338 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 14ed9fe0-b150-4bd8-852e-7f2f62d4374b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.339 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 1cda3ab8-0805-4bcd-955c-996994fd3cb4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.339 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.340 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.340 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.340 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.341 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.341 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.378 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.380 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:25:20 compute-0 nova_compute[188777]: 2026-02-19 20:25:20.382 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:25:22 compute-0 nova_compute[188777]: 2026-02-19 20:25:22.996 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:24 compute-0 nova_compute[188777]: 2026-02-19 20:25:24.239 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:28 compute-0 nova_compute[188777]: 2026-02-19 20:25:28.000 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:29 compute-0 nova_compute[188777]: 2026-02-19 20:25:29.241 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:29 compute-0 podman[204724]: time="2026-02-19T20:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:25:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:25:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Feb 19 20:25:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:25:30.436 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:25:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:25:30.437 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:25:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:25:30.439 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:25:31 compute-0 podman[247269]: 2026-02-19 20:25:31.388288959 +0000 UTC m=+0.070359547 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1770267347, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64)
Feb 19 20:25:31 compute-0 podman[247270]: 2026-02-19 20:25:31.399809598 +0000 UTC m=+0.076045985 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:25:31 compute-0 openstack_network_exporter[207898]: ERROR   20:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:25:31 compute-0 openstack_network_exporter[207898]: ERROR   20:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:25:33 compute-0 nova_compute[188777]: 2026-02-19 20:25:33.002 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:34 compute-0 nova_compute[188777]: 2026-02-19 20:25:34.243 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:36 compute-0 podman[247316]: 2026-02-19 20:25:36.437844498 +0000 UTC m=+0.118490955 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 19 20:25:38 compute-0 nova_compute[188777]: 2026-02-19 20:25:38.005 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:39 compute-0 nova_compute[188777]: 2026-02-19 20:25:39.246 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:39 compute-0 nova_compute[188777]: 2026-02-19 20:25:39.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:39 compute-0 nova_compute[188777]: 2026-02-19 20:25:39.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:25:39 compute-0 podman[247336]: 2026-02-19 20:25:39.429138913 +0000 UTC m=+0.107325287 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc.)
Feb 19 20:25:39 compute-0 podman[247337]: 2026-02-19 20:25:39.434141619 +0000 UTC m=+0.105265304 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 19 20:25:43 compute-0 nova_compute[188777]: 2026-02-19 20:25:43.007 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:43 compute-0 sshd-session[247315]: error: kex_exchange_identification: read: Connection timed out
Feb 19 20:25:43 compute-0 sshd-session[247315]: banner exchange: Connection from 125.94.106.195 port 46278: Connection timed out
Feb 19 20:25:43 compute-0 podman[247373]: 2026-02-19 20:25:43.415452026 +0000 UTC m=+0.090618278 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:25:44 compute-0 nova_compute[188777]: 2026-02-19 20:25:44.249 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:44 compute-0 nova_compute[188777]: 2026-02-19 20:25:44.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:44 compute-0 podman[247397]: 2026-02-19 20:25:44.818757122 +0000 UTC m=+0.123800400 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true)
Feb 19 20:25:45 compute-0 nova_compute[188777]: 2026-02-19 20:25:45.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:45 compute-0 nova_compute[188777]: 2026-02-19 20:25:45.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.300 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.301 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.301 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.302 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.396 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.497 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.498 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.549 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.550 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.612 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.614 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.675 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.683 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.733 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.734 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.812 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.813 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.863 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.864 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.917 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.926 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.976 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:46 compute-0 nova_compute[188777]: 2026-02-19 20:25:46.977 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.058 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.058 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.139 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.141 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.222 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.700 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.702 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4768MB free_disk=72.20396423339844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.703 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.703 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.785 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.786 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.786 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.786 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.786 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.855 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.868 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.869 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:25:47 compute-0 nova_compute[188777]: 2026-02-19 20:25:47.870 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:25:48 compute-0 nova_compute[188777]: 2026-02-19 20:25:48.009 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:48 compute-0 podman[247452]: 2026-02-19 20:25:48.489739277 +0000 UTC m=+0.158450828 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:25:49 compute-0 nova_compute[188777]: 2026-02-19 20:25:49.253 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:49 compute-0 nova_compute[188777]: 2026-02-19 20:25:49.870 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:49 compute-0 nova_compute[188777]: 2026-02-19 20:25:49.870 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:25:49 compute-0 nova_compute[188777]: 2026-02-19 20:25:49.870 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:25:50 compute-0 nova_compute[188777]: 2026-02-19 20:25:50.691 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:25:50 compute-0 nova_compute[188777]: 2026-02-19 20:25:50.692 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:25:50 compute-0 nova_compute[188777]: 2026-02-19 20:25:50.692 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:25:50 compute-0 nova_compute[188777]: 2026-02-19 20:25:50.692 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:25:52 compute-0 nova_compute[188777]: 2026-02-19 20:25:52.754 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:25:52 compute-0 nova_compute[188777]: 2026-02-19 20:25:52.786 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:25:52 compute-0 nova_compute[188777]: 2026-02-19 20:25:52.787 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:25:52 compute-0 nova_compute[188777]: 2026-02-19 20:25:52.788 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:52 compute-0 nova_compute[188777]: 2026-02-19 20:25:52.788 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:25:53 compute-0 nova_compute[188777]: 2026-02-19 20:25:53.009 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:54 compute-0 nova_compute[188777]: 2026-02-19 20:25:54.256 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:58 compute-0 nova_compute[188777]: 2026-02-19 20:25:58.012 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:59 compute-0 nova_compute[188777]: 2026-02-19 20:25:59.260 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:25:59 compute-0 podman[204724]: time="2026-02-19T20:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:25:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:25:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Feb 19 20:26:01 compute-0 openstack_network_exporter[207898]: ERROR   20:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:26:01 compute-0 openstack_network_exporter[207898]: ERROR   20:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:26:02 compute-0 podman[247481]: 2026-02-19 20:26:02.393933707 +0000 UTC m=+0.075905341 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:26:02 compute-0 podman[247480]: 2026-02-19 20:26:02.394116333 +0000 UTC m=+0.076588282 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:26:03 compute-0 nova_compute[188777]: 2026-02-19 20:26:03.015 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:04 compute-0 nova_compute[188777]: 2026-02-19 20:26:04.264 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:07 compute-0 podman[247521]: 2026-02-19 20:26:07.420747125 +0000 UTC m=+0.096621115 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 19 20:26:08 compute-0 nova_compute[188777]: 2026-02-19 20:26:08.018 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:09 compute-0 nova_compute[188777]: 2026-02-19 20:26:09.267 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:10 compute-0 podman[247541]: 2026-02-19 20:26:10.434736668 +0000 UTC m=+0.114079487 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 19 20:26:10 compute-0 podman[247540]: 2026-02-19 20:26:10.436204354 +0000 UTC m=+0.115641496 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=kepler, io.openshift.tags=base rhel9, distribution-scope=public, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 19 20:26:13 compute-0 nova_compute[188777]: 2026-02-19 20:26:13.020 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:14 compute-0 nova_compute[188777]: 2026-02-19 20:26:14.270 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:14 compute-0 podman[247577]: 2026-02-19 20:26:14.410241067 +0000 UTC m=+0.085223609 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:26:15 compute-0 podman[247601]: 2026-02-19 20:26:15.377328543 +0000 UTC m=+0.068702807 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.942 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.943 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.944 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.944 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.945 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.947 188781 INFO nova.compute.manager [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Terminating instance
Feb 19 20:26:17 compute-0 nova_compute[188777]: 2026-02-19 20:26:17.949 188781 DEBUG nova.compute.manager [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:26:17 compute-0 kernel: tap9838caff-8a (unregistering): left promiscuous mode
Feb 19 20:26:18 compute-0 NetworkManager[57033]: <info>  [1771532778.0048] device (tap9838caff-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.019 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 ovn_controller[98843]: 2026-02-19T20:26:18Z|00054|binding|INFO|Releasing lport 9838caff-8a65-491d-8b0d-3fb3d10c299c from this chassis (sb_readonly=0)
Feb 19 20:26:18 compute-0 ovn_controller[98843]: 2026-02-19T20:26:18Z|00055|binding|INFO|Setting lport 9838caff-8a65-491d-8b0d-3fb3d10c299c down in Southbound
Feb 19 20:26:18 compute-0 ovn_controller[98843]: 2026-02-19T20:26:18Z|00056|binding|INFO|Removing iface tap9838caff-8a ovn-installed in OVS
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.024 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.028 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.031 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:13 192.168.0.86'], port_security=['fa:16:3e:9c:8b:13 192.168.0.86'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5pf5gh4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-port-lze5z7eiiyr5', 'neutron:cidrs': '192.168.0.86/24', 'neutron:device_id': '14ed9fe0-b150-4bd8-852e-7f2f62d4374b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5pf5gh4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-port-lze5z7eiiyr5', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=9838caff-8a65-491d-8b0d-3fb3d10c299c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.032 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 9838caff-8a65-491d-8b0d-3fb3d10c299c in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 unbound from our chassis
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.034 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.046 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5a78c4-de0b-41ea-bf98-82f611750209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:26:18 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 19 20:26:18 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 32.873s CPU time.
Feb 19 20:26:18 compute-0 systemd-machined[158158]: Machine qemu-3-instance-00000003 terminated.
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.075 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[2d386164-aee5-4065-b088-eb5676d84241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.080 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[918f98ea-2b78-48ad-8370-afbccc405487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.107 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[5df44d0f-89bd-4d18-b113-4b79841327df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.125 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e73ef1-5340-4717-add6-39c947aefc38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 14, 'rx_bytes': 658, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 14, 'rx_bytes': 658, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 39446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247634, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.138 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f240ba-34d9-4961-bc07-0d65bad7cf7d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348361, 'tstamp': 348361}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247635, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348365, 'tstamp': 348365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247635, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
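The two privsep replies above are serialized pyroute2 netlink dumps: one RTM_NEWLINK for tapec82c3b7-51 and two RTM_NEWADDR records showing 169.254.169.254/32 (the metadata address) plus 192.168.0.2/24 bound inside the namespace named in their 'target' field. A minimal sketch of reading the same state with pyroute2 directly (illustrative only; the agent issues these calls through the oslo.privsep daemon rather than like this):

    from pyroute2 import NetNS

    # Namespace name taken from the 'target' field of the replies above.
    NS = 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681'

    with NetNS(NS) as ns:
        for link in ns.get_links():
            # IFLA_* attributes mirror the RTM_NEWLINK payload logged above.
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr(family=2):  # AF_INET
            # Expect 169.254.169.254/32 and 192.168.0.2/24 on tapec82c3b7-51.
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])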
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.140 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.142 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.148 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.148 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.149 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.150 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.151 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
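This transaction sequence is the metadata agent reconciling its port: delete any stale tapec82c3b7-50 from br-ex, ensure it exists on br-int, and stamp the Interface with the neutron port's iface-id. The two "Transaction caused no change" lines mean the if_exists/may_exist-guarded commands found ovsdb already in the desired state, so those commits were no-ops. A minimal ovsdbapp sketch of the same three commands, assuming a local ovsdb-server socket (the agent builds its connection differently):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapec82c3b7-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapec82c3b7-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapec82c3b7-50',
            ('external_ids',
             {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'})))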
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.172 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.177 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.221 188781 INFO nova.virt.libvirt.driver [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Instance destroyed successfully.
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.221 188781 DEBUG nova.objects.instance [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'resources' on Instance uuid 14ed9fe0-b150-4bd8-852e-7f2f62d4374b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.235 188781 DEBUG nova.virt.libvirt.vif [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:18:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-zdyrztqs2ra5-eeiurm4z7i6z-vnf-hs7qdifsqkdp',id=3,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:18:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-e6veut89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:18:37Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3ODg4OTgxMzIxNzM3NTU0MDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc4ODg5ODEzMjE3Mzc1NTQwND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3ODg4OTgxMzIxNzM3NTU0MDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Feb 19 20:26:18 compute-0 nova_compute[188777]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc4ODg5ODEzMjE3Mzc1NTQwND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3ODg4OTgxMzIxNzM3NTU0MDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01Nzg4ODk4MTMyMTczNzU1NDA0PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=14ed9fe0-b150-4bd8-852e-7f2f62d4374b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.235 188781 DEBUG nova.network.os_vif_util [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "address": "fa:16:3e:9c:8b:13", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.86", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9838caff-8a", "ovs_interfaceid": "9838caff-8a65-491d-8b0d-3fb3d10c299c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.236 188781 DEBUG nova.network.os_vif_util [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.236 188781 DEBUG os_vif [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.238 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.238 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9838caff-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.241 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.244 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.247 188781 INFO os_vif [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=9838caff-8a65-491d-8b0d-3fb3d10c299c,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9838caff-8a')
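The unplug path above runs through os-vif: nova converts its cached network info into a VIFOpenVSwitch object (nova_to_osvif_vif) and hands it to the ovs plugin, which issues the DelPortCommand seen at 20:26:18.238. A minimal sketch of the equivalent call using values from this log; the InstanceInfo name is an assumption (id=3 in the instance dump suggests instance-00000003), and this is not nova's actual invocation:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used above

    net = network.Network(id='ec82c3b7-5389-43ab-a939-ce6cd12f9681',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='9838caff-8a65-491d-8b0d-3fb3d10c299c',
        address='fa:16:3e:9c:8b:13',
        bridge_name='br-int',
        vif_name='tap9838caff-8a',
        network=net)
    info = instance_info.InstanceInfo(
        uuid='14ed9fe0-b150-4bd8-852e-7f2f62d4374b',
        name='instance-00000003')  # assumed libvirt name

    os_vif.unplug(ovs_vif, info)  # removes tap9838caff-8a from br-int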
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.247 188781 INFO nova.virt.libvirt.driver [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Deleting instance files /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b_del
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.248 188781 INFO nova.virt.libvirt.driver [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Deletion of /var/lib/nova/instances/14ed9fe0-b150-4bd8-852e-7f2f62d4374b_del complete
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.304 188781 INFO nova.compute.manager [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Took 0.35 seconds to destroy the instance on the hypervisor.
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.304 188781 DEBUG oslo.service.loopingcall [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.305 188781 DEBUG nova.compute.manager [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.306 188781 DEBUG nova.network.neutron [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:26:18 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:26:18.235 188781 DEBUG nova.virt.libvirt.vif [None req-35e446ee-ca [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
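This rsyslogd complaint refers to the nova.virt.libvirt.vif message at 20:26:18.235 above: the 8192-byte record exceeds rsyslog's configured maximum of 8096 bytes, so it is truncated, which is consistent with the base64 user_data stream above breaking off and resuming as a fresh nova_compute record mid-stream. If the full payload matters, the usual fix is raising rsyslog's global "$MaxMessageSize" directive (for example to 64k); it is a legacy global that must appear near the top of rsyslog.conf, before any inputs are loaded, as the linked error page explains.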
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.985 188781 DEBUG nova.compute.manager [req-89653dc7-82c1-43d8-853e-f8b948b5865f req-4fe3d235-52d4-40ed-85bc-1fc1776beab3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-vif-unplugged-9838caff-8a65-491d-8b0d-3fb3d10c299c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.986 188781 DEBUG oslo_concurrency.lockutils [req-89653dc7-82c1-43d8-853e-f8b948b5865f req-4fe3d235-52d4-40ed-85bc-1fc1776beab3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.987 188781 DEBUG oslo_concurrency.lockutils [req-89653dc7-82c1-43d8-853e-f8b948b5865f req-4fe3d235-52d4-40ed-85bc-1fc1776beab3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.987 188781 DEBUG oslo_concurrency.lockutils [req-89653dc7-82c1-43d8-853e-f8b948b5865f req-4fe3d235-52d4-40ed-85bc-1fc1776beab3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.987 188781 DEBUG nova.compute.manager [req-89653dc7-82c1-43d8-853e-f8b948b5865f req-4fe3d235-52d4-40ed-85bc-1fc1776beab3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] No waiting events found dispatching network-vif-unplugged-9838caff-8a65-491d-8b0d-3fb3d10c299c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.988 188781 DEBUG nova.compute.manager [req-89653dc7-82c1-43d8-853e-f8b948b5865f req-4fe3d235-52d4-40ed-85bc-1fc1776beab3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-vif-unplugged-9838caff-8a65-491d-8b0d-3fb3d10c299c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
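The Acquiring/acquired/released triplets above are oslo.concurrency's standard instrumentation: every guarded section logs how long it waited for the lock and how long it held it (here well under a millisecond). As a sketch, the pattern behind nova's per-instance event lock, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events')
    def _pop_event():
        # Critical section; lockutils emits DEBUG lines like those above
        # with the measured wait and hold times around this call.
        pass

    _pop_event()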
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.990 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:26:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:18.991 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:26:18 compute-0 nova_compute[188777]: 2026-02-19 20:26:18.991 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:19 compute-0 podman[247657]: 2026-02-19 20:26:19.458337034 +0000 UTC m=+0.137002520 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
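The podman health_status records here and below (ovn_controller, node_exporter, openstack_network_exporter, ovn_metadata_agent, kepler, ceilometer_agent_ipmi, podman_exporter) are periodic container healthchecks defined in each container's config_data ('healthcheck': {'test': '/openstack/healthcheck ...'}). health_status=healthy with health_failing_streak=0 means every probe in this window passed, so these lines are heartbeat noise rather than events.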
Feb 19 20:26:20 compute-0 nova_compute[188777]: 2026-02-19 20:26:20.838 188781 DEBUG nova.network.neutron [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:26:20 compute-0 nova_compute[188777]: 2026-02-19 20:26:20.856 188781 INFO nova.compute.manager [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Took 2.55 seconds to deallocate network for instance.
Feb 19 20:26:20 compute-0 nova_compute[188777]: 2026-02-19 20:26:20.911 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:20 compute-0 nova_compute[188777]: 2026-02-19 20:26:20.912 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.020 188781 DEBUG nova.compute.provider_tree [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.040 188781 DEBUG nova.scheduler.client.report [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
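The inventory reported to Placement determines schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the figures above advertise 32 VCPUs, 7167 MB of RAM and about 70.2 GB of disk:

    # Capacity formula used by Placement: (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~70.2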
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.064 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.079 188781 DEBUG nova.compute.manager [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.080 188781 DEBUG oslo_concurrency.lockutils [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.080 188781 DEBUG oslo_concurrency.lockutils [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.081 188781 DEBUG oslo_concurrency.lockutils [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.081 188781 DEBUG nova.compute.manager [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] No waiting events found dispatching network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.081 188781 WARNING nova.compute.manager [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received unexpected event network-vif-plugged-9838caff-8a65-491d-8b0d-3fb3d10c299c for instance with vm_state deleted and task_state None.
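This WARNING is a benign ordering artifact of deletion: neutron's network-vif-plugged and network-changed notifications for port 9838caff-8a65-491d-8b0d-3fb3d10c299c straggle in after the guest has been torn down, so the instance is already vm_state deleted with no task in flight. Nova finds no registered waiter for the event ("No waiting events found dispatching" above), logs it and discards it.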
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.082 188781 DEBUG nova.compute.manager [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Received event network-changed-9838caff-8a65-491d-8b0d-3fb3d10c299c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.082 188781 DEBUG nova.compute.manager [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Refreshing instance network info cache due to event network-changed-9838caff-8a65-491d-8b0d-3fb3d10c299c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.083 188781 DEBUG oslo_concurrency.lockutils [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.083 188781 DEBUG oslo_concurrency.lockutils [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.083 188781 DEBUG nova.network.neutron [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Refreshing network info cache for port 9838caff-8a65-491d-8b0d-3fb3d10c299c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.106 188781 INFO nova.scheduler.client.report [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Deleted allocations for instance 14ed9fe0-b150-4bd8-852e-7f2f62d4374b
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.178 188781 DEBUG oslo_concurrency.lockutils [None req-35e446ee-caf0-426c-8468-fad6d0853498 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "14ed9fe0-b150-4bd8-852e-7f2f62d4374b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:26:21 compute-0 nova_compute[188777]: 2026-02-19 20:26:21.708 188781 DEBUG nova.network.neutron [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:26:22 compute-0 nova_compute[188777]: 2026-02-19 20:26:22.809 188781 DEBUG nova.network.neutron [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:26:22 compute-0 nova_compute[188777]: 2026-02-19 20:26:22.833 188781 DEBUG oslo_concurrency.lockutils [req-792a19fd-1c5e-414c-a368-5fe144d7ab59 req-cfe5aa74-df07-4585-b464-b16d931d5ef7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-14ed9fe0-b150-4bd8-852e-7f2f62d4374b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:26:23 compute-0 nova_compute[188777]: 2026-02-19 20:26:23.030 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:23 compute-0 nova_compute[188777]: 2026-02-19 20:26:23.240 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:23.994 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
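This write closes the loop opened at 20:26:18.990: the agent matched SB_Global.nb_cfg moving from 7 to 8, deliberately waited the advertised 5 seconds to batch updates, and then recorded neutron:ovn-metadata-sb-cfg = '8' in its Chassis_Private external_ids. Neutron reads that value back to confirm the metadata agent is alive and has processed the southbound database up to that sequence number.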
Feb 19 20:26:28 compute-0 nova_compute[188777]: 2026-02-19 20:26:28.032 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:28 compute-0 nova_compute[188777]: 2026-02-19 20:26:28.242 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:29 compute-0 podman[204724]: time="2026-02-19T20:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:26:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:26:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Feb 19 20:26:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:30.438 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:30.438 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:26:30.439 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:26:31 compute-0 openstack_network_exporter[207898]: ERROR   20:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:26:31 compute-0 openstack_network_exporter[207898]: ERROR   20:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
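Both failing appctl calls target dpif-netdev/* commands, which exist only for the userspace (netdev/DPDK) datapath. This host's bridges run the kernel datapath (the VIF details above show "datapath_type": "system"), so ovs-vswitchd answers "please specify an existing datapath". The errors recur on every scrape and point at a collector that does not apply to this datapath type, not at an OVS fault.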
Feb 19 20:26:33 compute-0 nova_compute[188777]: 2026-02-19 20:26:33.034 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:33 compute-0 nova_compute[188777]: 2026-02-19 20:26:33.218 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771532778.2170863, 14ed9fe0-b150-4bd8-852e-7f2f62d4374b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:26:33 compute-0 nova_compute[188777]: 2026-02-19 20:26:33.219 188781 INFO nova.compute.manager [-] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] VM Stopped (Lifecycle Event)
Feb 19 20:26:33 compute-0 nova_compute[188777]: 2026-02-19 20:26:33.241 188781 DEBUG nova.compute.manager [None req-25bfce01-2e63-4e4a-9466-fab801457a63 - - - - - -] [instance: 14ed9fe0-b150-4bd8-852e-7f2f62d4374b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
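The Stopped lifecycle event is emitted at 20:26:33 but timestamped 1771532778.217, i.e. 20:26:18.217, the moment the guest was destroyed: nova's libvirt driver holds "stopped" lifecycle events for roughly 15 seconds before emitting them so that transient stops (for example reboots) can be cancelled, and the compute manager then re-checks the power state, as on the line above. For an instance that was just deleted the event is informational only.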
Feb 19 20:26:33 compute-0 nova_compute[188777]: 2026-02-19 20:26:33.245 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:33 compute-0 podman[247687]: 2026-02-19 20:26:33.400476704 +0000 UTC m=+0.071163473 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:26:33 compute-0 podman[247686]: 2026-02-19 20:26:33.438109084 +0000 UTC m=+0.113366115 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1770267347, build-date=2026-02-05T04:57:10Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9/ubi-minimal)
Feb 19 20:26:38 compute-0 nova_compute[188777]: 2026-02-19 20:26:38.037 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:38 compute-0 nova_compute[188777]: 2026-02-19 20:26:38.247 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:38 compute-0 podman[247730]: 2026-02-19 20:26:38.365844263 +0000 UTC m=+0.047490198 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 19 20:26:40 compute-0 nova_compute[188777]: 2026-02-19 20:26:40.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:40 compute-0 nova_compute[188777]: 2026-02-19 20:26:40.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
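_reclaim_queued_deletes only does work when nova.conf sets reclaim_instance_interval to a positive number of seconds, which turns instance deletion into a soft-delete with deferred reclamation by this periodic task. At the default of 0, as here, deletes are immediate and the task just logs the skip and returns.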
Feb 19 20:26:41 compute-0 podman[247750]: 2026-02-19 20:26:41.382372205 +0000 UTC m=+0.069392208 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:26:41 compute-0 podman[247751]: 2026-02-19 20:26:41.396445823 +0000 UTC m=+0.082873158 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 19 20:26:43 compute-0 nova_compute[188777]: 2026-02-19 20:26:43.040 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:43 compute-0 nova_compute[188777]: 2026-02-19 20:26:43.249 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:44 compute-0 podman[247790]: 2026-02-19 20:26:44.770590065 +0000 UTC m=+0.083800476 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
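The kepler, ceilometer_agent_ipmi and podman_exporter entries above all share one shape: a `container health_status` event whose key=value payload includes name= and health_status=. A minimal sketch of pulling those two fields out of such lines, assuming the journal text is available as a plain file (the /var/log/messages path is an assumption); the regexes are inferred from the entries above, not from any documented podman format guarantee.

```python
import re

# Field patterns as they appear in the health_status entries above.
# The first "name=" in these events is the container name.
NAME_RE = re.compile(r"\bname=([^,)]+)")
HEALTH_RE = re.compile(r"\bhealth_status=([^,)]+)")

def parse_health(line):
    """Return (container_name, health_status), or None for other lines."""
    if "container health_status" not in line:
        return None
    name = NAME_RE.search(line)
    health = HEALTH_RE.search(line)
    if name and health:
        return name.group(1), health.group(1)
    return None

with open("/var/log/messages") as fh:  # hypothetical journal dump path
    for line in fh:
        hit = parse_health(line)
        if hit:
            print("%s -> %s" % hit)
```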
Feb 19 20:26:45 compute-0 nova_compute[188777]: 2026-02-19 20:26:45.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.309 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.309 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.309 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
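The three lines above are the standard oslo.concurrency pattern: acquire the named "compute_resources" lock, run the critical section, release it, with waited/held durations logged by the `inner` wrapper. A minimal sketch of the same pattern using oslo_concurrency.lockutils directly; the lock name matches the log, the function body is illustrative.

```python
from oslo_concurrency import lockutils

# Decorator form: all callers sharing the lock name are serialized,
# producing the acquire/release pairs seen in the log above.
@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # ... critical section touching resource-tracker state ...
    pass

# Context-manager form is equivalent for ad-hoc critical sections.
with lockutils.lock('compute_resources'):
    pass
```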
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.309 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:26:46 compute-0 podman[247813]: 2026-02-19 20:26:46.392050443 +0000 UTC m=+0.082458454 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216)
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.425 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.470 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.471 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.516 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.517 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.563 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.564 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.626 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.633 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.688 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.689 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.762 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.763 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.850 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.851 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:26:46 compute-0 nova_compute[188777]: 2026-02-19 20:26:46.914 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
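Every disk probe above is the same command: qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a corrupt or hostile image cannot wedge or bloat the compute agent. A sketch reproducing that exact invocation with subprocess, assuming the oslo.concurrency package is installed; the instance path is copied from the log.

```python
import json
import subprocess

def qemu_img_info(path):
    """Probe an image the way nova does above: under prlimit caps."""
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

info = qemu_img_info(
    "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk")
print(info["format"], info["virtual-size"])
```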
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.301 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.302 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4934MB free_disk=72.2265739440918GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.302 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.302 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.368 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.368 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.368 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.369 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.426 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.438 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.461 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:26:47 compute-0 nova_compute[188777]: 2026-02-19 20:26:47.461 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
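The inventory reported above is what placement allocates against: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. Checking the numbers from the log:

```python
# Inventory exactly as reported above for provider c266959e-...
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```

With the two instances placed above consuming 1 VCPU, 512 MB and 2 GB each, usage sits well inside those limits, which is why the claim later in this log succeeds immediately.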
Feb 19 20:26:47 compute-0 sshd-session[247859]: Accepted publickey for zuul from 38.102.83.176 port 45202 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 20:26:47 compute-0 systemd-logind[810]: New session 29 of user zuul.
Feb 19 20:26:47 compute-0 systemd[1]: Started Session 29 of User zuul.
Feb 19 20:26:47 compute-0 sshd-session[247859]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.042 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.252 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.460 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.461 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:26:48 compute-0 sudo[248036]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frkxjckkyshnnedcidkzirgeiqintcka ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771532808.072083-59072-133212484281508/AnsiballZ_command.py'
Feb 19 20:26:48 compute-0 sudo[248036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.706 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.707 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:26:48 compute-0 nova_compute[188777]: 2026-02-19 20:26:48.707 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:26:48 compute-0 python3[248039]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:26:48 compute-0 sudo[248036]: pam_unix(sudo:session): session closed for user root
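The sudo'd Ansible task above shells out to `podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute` to verify the agent container is up. The same check as a sketch without the shell pipeline, filtering in Python instead of grep:

```python
import subprocess

out = subprocess.run(
    ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
    check=True, capture_output=True, text=True,
)
# Keep only the line for the container the task above greps for.
matches = [line for line in out.stdout.splitlines()
           if "ceilometer_agent_compute" in line]
print(matches)
```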
Feb 19 20:26:49 compute-0 nova_compute[188777]: 2026-02-19 20:26:49.654 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:26:49 compute-0 nova_compute[188777]: 2026-02-19 20:26:49.672 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:26:49 compute-0 nova_compute[188777]: 2026-02-19 20:26:49.673 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
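The cache refresh above carries the full neutron view of the instance's port as JSON: one fixed IP, 192.168.0.76, with floating IP 192.168.122.174 attached, on an OVN-bound OVS port. A short sketch of walking that structure, with network_info abridged from the logged entry down to the fields used:

```python
# Abridged from the network_info logged above for instance 1cda3ab8-...
network_info = [{
    "address": "fa:16:3e:2c:50:54",
    "network": {"subnets": [{
        "cidr": "192.168.0.0/24",
        "ips": [{
            "address": "192.168.0.76",
            "floating_ips": [{"address": "192.168.122.174"}],
        }],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["address"], ip["address"], floats)
```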
Feb 19 20:26:49 compute-0 nova_compute[188777]: 2026-02-19 20:26:49.674 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:49 compute-0 nova_compute[188777]: 2026-02-19 20:26:49.675 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:49 compute-0 nova_compute[188777]: 2026-02-19 20:26:49.676 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:50 compute-0 podman[248080]: 2026-02-19 20:26:50.466941537 +0000 UTC m=+0.136552826 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:26:51 compute-0 nova_compute[188777]: 2026-02-19 20:26:51.475 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:52 compute-0 nova_compute[188777]: 2026-02-19 20:26:52.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:26:53 compute-0 ovn_controller[98843]: 2026-02-19T20:26:53Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 19 20:26:53 compute-0 nova_compute[188777]: 2026-02-19 20:26:53.045 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:53 compute-0 nova_compute[188777]: 2026-02-19 20:26:53.255 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:58 compute-0 nova_compute[188777]: 2026-02-19 20:26:58.048 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:58 compute-0 nova_compute[188777]: 2026-02-19 20:26:58.259 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:26:59 compute-0 podman[204724]: time="2026-02-19T20:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:26:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:26:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
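The two GET requests above are podman's libpod REST API being polled over the local socket, the same unix:///run/podman/podman.sock endpoint the podman_exporter container is pointed at earlier in this log. A minimal sketch of issuing the containers/json call from Python; the UnixHTTPConnection subclass is illustrative boilerplate, and Names/State are standard libpod response keys.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for c in json.loads(body):
    print(c["Names"], c["State"])
```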
Feb 19 20:26:59 compute-0 sshd-session[248105]: Invalid user user from 103.250.11.249 port 36200
Feb 19 20:27:00 compute-0 sshd-session[248105]: Received disconnect from 103.250.11.249 port 36200:11: Bye Bye [preauth]
Feb 19 20:27:00 compute-0 sshd-session[248105]: Disconnected from invalid user user 103.250.11.249 port 36200 [preauth]
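The three sshd lines above are a routine credential probe from 103.250.11.249: an invalid username followed by an immediate pre-auth disconnect. A small sketch for tallying such probes per source address, assuming the sshd messages land in a file like /var/log/secure (the path is an assumption); the pattern is copied from the lines above.

```python
import re
from collections import Counter

INVALID_RE = re.compile(
    r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+)")

probes = Counter()
with open("/var/log/secure") as fh:  # hypothetical sshd log path
    for line in fh:
        m = INVALID_RE.search(line)
        if m:
            probes[m.group("ip")] += 1

# Source addresses ranked by number of invalid-user attempts.
for ip, count in probes.most_common(10):
    print(ip, count)
```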
Feb 19 20:27:01 compute-0 openstack_network_exporter[207898]: ERROR   20:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:27:01 compute-0 openstack_network_exporter[207898]: ERROR   20:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:27:03 compute-0 nova_compute[188777]: 2026-02-19 20:27:03.050 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:03 compute-0 nova_compute[188777]: 2026-02-19 20:27:03.261 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.145 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "61de29f2-275d-4f98-bb19-ef0063b0b709" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.146 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "61de29f2-275d-4f98-bb19-ef0063b0b709" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.162 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.244 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.245 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.254 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.254 188781 INFO nova.compute.claims [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.390 188781 DEBUG nova.compute.provider_tree [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.410 188781 DEBUG nova.scheduler.client.report [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:27:04 compute-0 podman[248107]: 2026-02-19 20:27:04.420502501 +0000 UTC m=+0.095233062 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, vcs-type=git, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Feb 19 20:27:04 compute-0 podman[248108]: 2026-02-19 20:27:04.426146026 +0000 UTC m=+0.103761207 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.438 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.439 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.478 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.531 188781 INFO nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.637 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.725 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.727 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.730 188781 INFO nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Creating image(s)
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.731 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.732 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.733 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.734 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "9c0494affe141b25c092c57304050b881d108640" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:04 compute-0 nova_compute[188777]: 2026-02-19 20:27:04.734 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "9c0494affe141b25c092c57304050b881d108640" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.728 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.788 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.790 188781 DEBUG nova.virt.images [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] 5e9d1c50-cac1-48d4-87a6-109f03376fee was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.793 188781 DEBUG nova.privsep.utils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.794 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.part /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.991 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.part /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.converted" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:05 compute-0 nova_compute[188777]: 2026-02-19 20:27:05.994 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.053 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.055 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "9c0494affe141b25c092c57304050b881d108640" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.069 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.112 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.113 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "9c0494affe141b25c092c57304050b881d108640" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.114 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "9c0494affe141b25c092c57304050b881d108640" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.125 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.171 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.172 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640,backing_fmt=raw /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.202 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640,backing_fmt=raw /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk 1073741824" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.204 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "9c0494affe141b25c092c57304050b881d108640" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
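The sequence above — probe the cached base image, take a lock keyed on its hash, then "qemu-img create -f qcow2 -o backing_file=...,backing_fmt=raw" — is the Qcow2 image backend materialising the instance's root disk as a copy-on-write overlay: the base stays raw and shared, and only guest writes consume space in the per-instance file. A minimal sketch of the overlay step, assuming qemu-img is installed (paths copied from the log):

    import subprocess

    base = '/var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640'
    disk = '/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk'

    # Create a 1 GiB qcow2 overlay; unwritten clusters fall through to the
    # raw base image, so many instances can share one cached base.
    subprocess.run(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', f'backing_file={base},backing_fmt=raw',
         disk, '1073741824'],
        check=True, env={'LC_ALL': 'C', 'LANG': 'C'})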
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.205 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.251 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.252 188781 DEBUG nova.virt.disk.api [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Checking if we can resize image /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.253 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.301 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.302 188781 DEBUG nova.virt.disk.api [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Cannot resize image /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
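The requested size (1073741824) equals the virtual size of the overlay that was just created, so the resize is refused: Nova only ever grows a disk in place, never shrinks it. A sketch of that check (not Nova's exact implementation) built on the same qemu-img info JSON used above:

    import json
    import subprocess

    def can_resize_image(path, requested_bytes):
        # qemu-img reports the current virtual size; anything at or below it
        # triggers the "Cannot resize image ... to a smaller size" path.
        info = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            check=True, capture_output=True, text=True)
        virtual_size = json.loads(info.stdout)['virtual-size']
        return requested_bytes > virtual_size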
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.303 188781 DEBUG nova.objects.instance [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'migration_context' on Instance uuid 61de29f2-275d-4f98-bb19-ef0063b0b709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.321 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.321 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.323 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
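The disk.info write is serialised with an oslo.concurrency inter-process lock keyed on the file path — the same mechanism behind every "Acquiring lock ... / released" pair in this trace. A minimal sketch, assuming oslo.concurrency is installed (lock name, lock_path, and file contents here are illustrative, not Nova's literal values):

    from oslo_concurrency import lockutils

    # external=True takes an fcntl file lock under lock_path, so the write is
    # serialised across processes, not just across threads in one worker.
    @lockutils.synchronized('disk.info-61de29f2', external=True,
                            lock_path='/tmp/nova-locks')
    def write_to_disk_info_file():
        # disk.info maps each disk path to its resolved driver format.
        with open('/tmp/disk.info', 'w') as f:
            f.write('{"/var/lib/nova/instances/'
                    '61de29f2-275d-4f98-bb19-ef0063b0b709/disk": "qcow2"}\n')

    write_to_disk_info_file()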
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.341 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.389 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.390 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.392 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.418 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.463 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.464 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.497 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.eph0 1073741824" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.498 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.499 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.546 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.547 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.547 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Ensure instance console log exists: /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.548 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.549 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.549 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.552 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:26:52Z,direct_url=<?>,disk_format='qcow2',id=5e9d1c50-cac1-48d4-87a6-109f03376fee,min_disk=0,min_ram=0,name='fvt_testing_image',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:26:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '5e9d1c50-cac1-48d4-87a6-109f03376fee'}], 'ephemerals': [{'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.559 188781 WARNING nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.568 188781 DEBUG nova.virt.libvirt.host [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.569 188781 DEBUG nova.virt.libvirt.host [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.574 188781 DEBUG nova.virt.libvirt.host [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.574 188781 DEBUG nova.virt.libvirt.host [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
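The two probes above look for a CPU controller first in cgroups v1 (missing on this host) and then in cgroups v2 (found). Under v2 the root cgroup advertises every available controller in a single file, so the check reduces to roughly this (a sketch, assuming a unified cgroup hierarchy mounted at /sys/fs/cgroup):

    def has_cgroupsv2_cpu_controller():
        # The root cgroup lists available controllers space-separated,
        # e.g. "cpuset cpu io memory pids".
        with open('/sys/fs/cgroup/cgroup.controllers') as f:
            return 'cpu' in f.read().split()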
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.575 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.575 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:26:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='4c8196e0-1231-4751-937d-a7c927b0d2f3',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-19T20:26:52Z,direct_url=<?>,disk_format='qcow2',id=5e9d1c50-cac1-48d4-87a6-109f03376fee,min_disk=0,min_ram=0,name='fvt_testing_image',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-19T20:26:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.576 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.576 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.576 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.576 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.576 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.577 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.577 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.577 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.577 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.578 188781 DEBUG nova.virt.hardware [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
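With no flavor or image constraints (all preferences 0:0:0 and per-axis limits of 65536), the only topology for one vCPU is 1 socket x 1 core x 1 thread, which is what the log settles on. A brute-force sketch of the enumeration rule — sockets * cores * threads must equal the vCPU count within the limits (illustrative, not Nova's actual algorithm):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) factorisation of vcpus that fits
        # the limits; for vcpus=1 this yields exactly [(1, 1, 1)].
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found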
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.582 188781 DEBUG nova.objects.instance [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'pci_devices' on Instance uuid 61de29f2-275d-4f98-bb19-ef0063b0b709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.601 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <uuid>61de29f2-275d-4f98-bb19-ef0063b0b709</uuid>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <name>instance-00000005</name>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <memory>524288</memory>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:name>fvt_testing_server</nova:name>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:27:06</nova:creationTime>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:flavor name="fvt_testing_flavor">
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:memory>512</nova:memory>
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:ephemeral>1</nova:ephemeral>
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:user uuid="9f5597a45dc34ee19bcfe938afde768f">admin</nova:user>
Feb 19 20:27:06 compute-0 nova_compute[188777]:         <nova:project uuid="59f01dee51a74ac1a9f82733f591827d">admin</nova:project>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="5e9d1c50-cac1-48d4-87a6-109f03376fee"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <nova:ports/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <system>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <entry name="serial">61de29f2-275d-4f98-bb19-ef0063b0b709</entry>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <entry name="uuid">61de29f2-275d-4f98-bb19-ef0063b0b709</entry>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </system>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <os>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </os>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <features>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </features>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.eph0"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <target dev="vdb" bus="virtio"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.config"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/console.log" append="off"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <video>
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </video>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:27:06 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:27:06 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:27:06 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:27:06 compute-0 nova_compute[188777]: </domain>
Feb 19 20:27:06 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
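The generated domain XML above can be inspected offline with the standard library; for example, listing the guest's block devices (a sketch, assuming the <domain> document is saved to domain.xml):

    import xml.etree.ElementTree as ET

    tree = ET.parse('domain.xml')
    for disk in tree.findall('devices/disk'):
        target = disk.find('target')
        source = disk.find('source')
        print(disk.get('device'), target.get('dev'), target.get('bus'),
              source.get('file'))
    # Prints three rows for this guest: vda/virtio (root disk),
    # vdb/virtio (ephemeral disk), and sda/sata (the config-drive cdrom).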
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.669 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.669 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.669 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:27:06 compute-0 nova_compute[188777]: 2026-02-19 20:27:06.670 188781 INFO nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Using config drive
Feb 19 20:27:07 compute-0 nova_compute[188777]: 2026-02-19 20:27:07.658 188781 INFO nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Creating config drive at /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.config
Feb 19 20:27:07 compute-0 nova_compute[188777]: 2026-02-19 20:27:07.661 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp00zuvfz0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:07 compute-0 nova_compute[188777]: 2026-02-19 20:27:07.782 188781 DEBUG oslo_concurrency.processutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp00zuvfz0" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
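The config drive is just an ISO 9660 image with Joliet (-J) and Rock Ridge (-r) extensions and the volume label config-2 that cloud-init searches for, built from a staging directory of metadata files. A minimal sketch of the same invocation, assuming mkisofs (or genisoimage) is installed and /tmp/cfgdrive holds the metadata tree (the staging path is illustrative; the flags are copied from the log):

    import subprocess

    subprocess.run(
        ['mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-J', '-r',          # Joliet + Rock Ridge filename extensions
         '-V', 'config-2',    # the volume label cloud-init looks for
         '-quiet',
         '/tmp/cfgdrive'],
        check=True)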
Feb 19 20:27:07 compute-0 systemd-machined[158158]: New machine qemu-5-instance-00000005.
Feb 19 20:27:07 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.052 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:08 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:27:08 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.264 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:08 compute-0 podman[248237]: 2026-02-19 20:27:08.567020501 +0000 UTC m=+0.113758608 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.574 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532828.574259, 61de29f2-275d-4f98-bb19-ef0063b0b709 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.575 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] VM Resumed (Lifecycle Event)
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.586 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.587 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.593 188781 INFO nova.virt.libvirt.driver [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Instance spawned successfully.
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.593 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.596 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.603 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.619 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.619 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.620 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.620 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.621 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.621 188781 DEBUG nova.virt.libvirt.driver [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.625 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.625 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771532828.585981, 61de29f2-275d-4f98-bb19-ef0063b0b709 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.625 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] VM Started (Lifecycle Event)
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.647 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.652 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.692 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] During sync_power_state the instance has a pending task (spawning). Skip.
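Both lifecycle events resolve the same way: the instance still carries task_state 'spawning', so the power-state sync leaves it alone rather than "correcting" the DB (power_state 0, NOSTATE) against the hypervisor (power_state 1, RUNNING) mid-build. The guard reduces to roughly this (a sketch, not Nova's full state machine):

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # While a task such as 'spawning' is in flight, the hypervisor and
        # the DB legitimately disagree, so the periodic sync skips it.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state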
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.701 188781 INFO nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Took 3.98 seconds to spawn the instance on the hypervisor.
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.701 188781 DEBUG nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.761 188781 INFO nova.compute.manager [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Took 4.54 seconds to build instance.
Feb 19 20:27:08 compute-0 nova_compute[188777]: 2026-02-19 20:27:08.776 188781 DEBUG oslo_concurrency.lockutils [None req-73387053-bb06-4931-982b-da0c3f1983bc 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "61de29f2-275d-4f98-bb19-ef0063b0b709" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:12 compute-0 podman[248261]: 2026-02-19 20:27:12.397356082 +0000 UTC m=+0.079573714 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:27:12 compute-0 podman[248260]: 2026-02-19 20:27:12.421990348 +0000 UTC m=+0.103012354 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9)
Feb 19 20:27:13 compute-0 nova_compute[188777]: 2026-02-19 20:27:13.056 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:13 compute-0 nova_compute[188777]: 2026-02-19 20:27:13.268 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.142 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to run longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.143 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f68f6630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.152 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.155 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 61de29f2-275d-4f98-bb19-ef0063b0b709 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.157 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/61de29f2-275d-4f98-bb19-ef0063b0b709 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:27:15 compute-0 podman[248297]: 2026-02-19 20:27:15.419897948 +0000 UTC m=+0.096202682 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.734 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Thu, 19 Feb 2026 20:27:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a5149857-cb35-4d1c-b8c6-4512ebf8c758 x-openstack-request-id: req-a5149857-cb35-4d1c-b8c6-4512ebf8c758 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.735 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "61de29f2-275d-4f98-bb19-ef0063b0b709", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "user_id": "9f5597a45dc34ee19bcfe938afde768f", "metadata": {}, "hostId": "fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7", "image": {"id": "5e9d1c50-cac1-48d4-87a6-109f03376fee", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/5e9d1c50-cac1-48d4-87a6-109f03376fee"}]}, "flavor": {"id": "4c8196e0-1231-4751-937d-a7c927b0d2f3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/4c8196e0-1231-4751-937d-a7c927b0d2f3"}]}, "created": "2026-02-19T20:27:03Z", "updated": "2026-02-19T20:27:08Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/61de29f2-275d-4f98-bb19-ef0063b0b709"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/61de29f2-275d-4f98-bb19-ef0063b0b709"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:27:08.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.735 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/61de29f2-275d-4f98-bb19-ef0063b0b709 used request id req-a5149857-cb35-4d1c-b8c6-4512ebf8c758 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
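The REQ/RESP pair above is a plain Nova GET /v2.1/servers/{id} issued through a keystoneauth session. A hedged sketch of the same lookup with python-novaclient; the auth URL, credentials, and project names below are placeholders, not values from this deployment:

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

# Placeholder credentials -- substitute real ones for your cloud.
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="ceilometer",
    password="secret",
    project_name="service",
    user_domain_id="default",
    project_domain_id="default",
)
sess = session.Session(auth=auth)
nova = nova_client.Client("2.1", session=sess)  # microversion as in the log

server = nova.servers.get("61de29f2-275d-4f98-bb19-ef0063b0b709")
print(server.name, server.status)  # e.g. fvt_testing_server ACTIVE
```

Note that the X-Auth-Token in the logged curl command is SHA256-hashed by keystoneauth before logging, so the raw token never reaches the journal.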
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.737 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '61de29f2-275d-4f98-bb19-ef0063b0b709', 'name': 'fvt_testing_server', 'flavor': {'id': '4c8196e0-1231-4751-937d-a7c927b0d2f3', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5e9d1c50-cac1-48d4-87a6-109f03376fee'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.742 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'name': 'vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.742 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.743 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.743 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.743 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:27:15.743379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.750 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.759 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.760 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.761 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.761 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.761 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.761 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.761 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.762 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.762 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
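The ERROR above is ceilometer's blacklisting path: when a pollster raises PollsterPermanentError for a set of resources, the manager stops polling those resources for that pollster/source pair instead of retrying every cycle. A rough sketch of a pollster that triggers it, based on the public plugin_base API (class and method details may differ slightly across releases):

```python
from ceilometer.polling import plugin_base

class RateLikePollster(plugin_base.PollsterBase):
    """Illustrative pollster for a *.rate meter with no libvirt backing."""

    @property
    def default_discovery(self):
        return 'local_instances'

    def get_samples(self, manager, cache, resources):
        # LibvirtInspector exposes cumulative counters, not precomputed
        # rates, so permanently give up on these resources rather than
        # failing again on every polling interval.
        raise plugin_base.PollsterPermanentError(resources)
```

That matches the pairing in the log: "LibvirtInspector does not provide data for IncomingBytesRatePollster" followed immediately by the permanent-error blacklist.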
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.763 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.763 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:27:15.761889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.763 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.763 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.764 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:27:15.763976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.764 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.764 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.765 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.765 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.766 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.766 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.766 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.766 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:27:15.766858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.767 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.767 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.768 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.768 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.768 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.769 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.769 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.769 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:27:15.769522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.770 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.770 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.771 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.771 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.771 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.771 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.772 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.772 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:27:15.772246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.792 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.827 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.847 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.848 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.848 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.849 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.849 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.849 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.849 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.850 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.850 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.851 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.852 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.852 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.852 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.852 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.853 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:27:15.849759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:27:15.853298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.875 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.876 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.877 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.920 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.921 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.921 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.947 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.948 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.949 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.950 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
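The three disk.device.capacity samples per instance line up with the flavor reported during discovery (disk: 1, ephemeral: 1, i.e. two 1 GiB devices) plus one small third device, plausibly the config drive (the RESP body above shows "config_drive": "True"); that last attribution is an inference. Quick unit check in Python:

```python
GiB = 1024 ** 3
print(1073741824 / GiB)  # 1.0 -> the 1 GiB root and ephemeral disks
print(485376 / 1024)     # 474.0 KiB -> small third device (config drive?)
```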
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.950 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.950 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.950 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.950 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.951 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:15.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:27:15.951028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.029 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.029 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.030 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.137 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.137 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.138 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.243 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.243 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.243 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.244 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.244 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.244 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.244 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.244 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.245 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.245 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 42610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.245 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/cpu volume: 7030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.245 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/cpu volume: 38760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
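The cpu meter above is cumulative guest CPU time in nanoseconds, so a single sample says little; utilisation needs the delta between two polls. A worked sketch (the earlier sample value and the 30 s interval are invented for the example):

```python
def cpu_util_percent(cpu_ns_prev, cpu_ns_now, wall_seconds, vcpus):
    """Average CPU utilisation (%) between two cumulative cpu samples."""
    delta_ns = cpu_ns_now - cpu_ns_prev
    return delta_ns / (wall_seconds * 1e9 * vcpus) * 100.0

# Instance 5aaac42d... reads 42_610_000_000 ns now; assume 39_610_000_000 ns
# 30 s earlier on its 1-vCPU m1.small flavor:
print(cpu_util_percent(39_610_000_000, 42_610_000_000, 30, 1))  # -> 10.0
```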
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.246 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:27:16.244976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.247 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.248 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:27:16.246583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.248 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.248 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.latency volume: 575757059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.248 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.249 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.latency volume: 4376426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.249 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 683601533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.249 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 109290795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.249 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 110141141 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.250 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
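
The lines above trace one complete pollster run: discovery of local instances, a coordination check (skipped, since no hashring is configured), a heartbeat update, one sample per instance disk, then the "Finished polling" marker. A minimal Python sketch of that sequence follows; every name in it (run_pollster, discover_instances, stats_for, emit_sample) is hypothetical and stands in for ceilometer internals rather than reproducing them.

# A minimal sketch (assumed names throughout; not ceilometer's real API) of
# the per-pollster sequence traced in the log lines above: discovery ->
# coordination check -> heartbeat -> one sample per instance device -> finish.

from datetime import datetime, timezone


def run_pollster(meter, discover_instances, stats_for, emit_sample,
                 hashrings=None):
    """Poll one meter for every locally discovered instance."""
    instances = discover_instances()        # "Executing discovery process ..."
    if hashrings is None:
        # "... is not configured in a source for polling that requires
        # coordination": no workload partitioning, poll everything locally.
        pass
    heartbeat = datetime.now(timezone.utc)  # "Pollster heartbeat update: ..."
    for instance in instances:
        for value in stats_for(instance, meter):   # one value per device
            emit_sample(instance, meter, value)    # "<uuid>/<meter> volume: <v>"
    return heartbeat                        # "Finished polling pollster ..."


# Toy run shaped like the cycle above (placeholder names and stats):
run_pollster(
    "disk.device.read.latency",
    discover_instances=lambda: ["instance-a", "instance-b"],
    stats_for=lambda inst, meter: [658474829, 116712843],
    emit_sample=lambda inst, meter, v: print(f"{inst}/{meter} volume: {v}"),
)

The same five-step pattern repeats below for every meter in the cycle (network.incoming.packets.drop, disk.device.read.requests, and so on); only the pollster object and the sampled values change.
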
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.250 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.250 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.250 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.251 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.251 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.251 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.251 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.251 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.251 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.252 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.252 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.252 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.252 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.252 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.252 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:27:16.247809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:27:16.251086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:27:16.252402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.253 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.254 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.254 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.254 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.254 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.255 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.255 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.255 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.255 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.256 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.256 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.256 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.256 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.256 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.256 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.257 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.257 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.257 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.257 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.257 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.258 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.258 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.258 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.258 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.258 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.258 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.259 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.259 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.259 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.259 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.260 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.260 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:27:16.253769) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:27:16.256962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:27:16.258323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.261 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.262 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.262 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.262 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.262 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.263 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.263 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.263 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.263 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.264 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.264 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.264 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.264 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.264 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.265 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.265 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.265 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.265 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.265 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.266 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.266 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.266 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.266 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.267 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.267 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.267 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.268 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.269 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.269 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.269 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.270 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 1916273341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.270 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 10533639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.270 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.271 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.271 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.271 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.271 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.271 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.272 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.272 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.272 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.272 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.272 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:27:16.261705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.273 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:27:16.265011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:27:16.268370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:27:16.271985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:27:16.273319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.274 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.275 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.275 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.275 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.276 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.276 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.276 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.277 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.277 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.277 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.278 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.278 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:27:16.276770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:27:16.278155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.279 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.280 15 DEBUG ceilometer.compute.pollsters [-] 61de29f2-275d-4f98-bb19-ef0063b0b709/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.280 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 61de29f2-275d-4f98-bb19-ef0063b0b709: ceilometer.compute.pollsters.NoVolumeException
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.280 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.282 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
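
The WARNING in the memory.usage run above is the expected path when the hypervisor has no value to report for an instance (a shut-off domain, for example): the pollster logs the NoVolumeException, yields no sample for that instance, and the cycle still finishes. A hedged sketch of that control flow, where the exception class and memory_usage_mb helper are local stand-ins rather than the real ceilometer code:

# Stand-in sketch: skip instances whose stats are unavailable, as in the
# WARNING above. NoVolumeException here is a local class named after the one
# in the log, not imported from ceilometer.

class NoVolumeException(Exception):
    """Raised when the hypervisor reports no value for the requested meter."""


def memory_usage_mb(instance):
    # Pretend the second instance is powered off and exposes no memory stats.
    stats = {"inst-a": 48.734375, "inst-b": None, "inst-c": 48.92578125}
    value = stats.get(instance)
    if value is None:
        raise NoVolumeException(instance)
    return value


def poll_memory(instances, log=print):
    samples = {}
    for inst in instances:
        try:
            samples[inst] = memory_usage_mb(inst)
        except NoVolumeException:
            # Matches the log: warn, emit nothing, keep polling the rest.
            log(f"WARNING memory.usage statistic is not available for instance {inst}")
    return samples


print(poll_memory(["inst-a", "inst-b", "inst-c"]))
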
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.283 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.284 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.284 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.284 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.285 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.286 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.287 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.291 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.291 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.291 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:27:16.279841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:27:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:27:16.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:27:16.284080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
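The burst above is one complete poll cycle: the agent runs each configured pollster once, publishes its samples, and then stamps a per-pollster heartbeat so stalled pollsters can be detected. A minimal sketch of that dispatch pattern — hypothetical pollster callables stand in for ceilometer's real inspector-backed ones, this is not the manager's actual code:

    import datetime

    class PollingTask:
        # Toy dispatch loop: run each pollster once per interval,
        # publish its samples, then stamp a per-pollster heartbeat
        # so a watchdog thread can flag stalled pollsters.
        def __init__(self, pollsters):
            self.pollsters = pollsters      # e.g. {"cpu": poll_cpu, ...}
            self.heartbeats = {}

        def publish(self, name, samples):
            pass                            # hand off to the publisher

        def poll_and_notify(self):
            for name, poll in self.pollsters.items():
                self.publish(name, poll())
                print(f"Finished processing pollster [{name}].")
                self.heartbeats[name] = datetime.datetime.utcnow()

    PollingTask({"cpu": lambda: []}).poll_and_notify()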
Feb 19 20:27:17 compute-0 podman[248321]: 2026-02-19 20:27:17.444772899 +0000 UTC m=+0.111494657 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Feb 19 20:27:18 compute-0 nova_compute[188777]: 2026-02-19 20:27:18.057 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:18 compute-0 nova_compute[188777]: 2026-02-19 20:27:18.270 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:21 compute-0 podman[248341]: 2026-02-19 20:27:21.455370285 +0000 UTC m=+0.136646949 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.878 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "61de29f2-275d-4f98-bb19-ef0063b0b709" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.880 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "61de29f2-275d-4f98-bb19-ef0063b0b709" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.881 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "61de29f2-275d-4f98-bb19-ef0063b0b709-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.883 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "61de29f2-275d-4f98-bb19-ef0063b0b709-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.884 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "61de29f2-275d-4f98-bb19-ef0063b0b709-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
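The Acquiring/acquired/released triplets come from oslo.concurrency's lockutils: nova serializes instance teardown by taking named locks around each critical section. The same pattern in isolation — lock names copied from the log, bodies replaced with stand-ins:

    from oslo_concurrency import lockutils

    def clear_events():
        pass  # stand-in for the real critical section

    # Context-manager form: one holder at a time per lock name.
    with lockutils.lock("61de29f2-275d-4f98-bb19-ef0063b0b709-events"):
        clear_events()

    # Decorator form, as ComputeManager uses for coarser-grained locks.
    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass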
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.887 188781 INFO nova.compute.manager [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Terminating instance
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.890 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "refresh_cache-61de29f2-275d-4f98-bb19-ef0063b0b709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.891 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquired lock "refresh_cache-61de29f2-275d-4f98-bb19-ef0063b0b709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:27:22 compute-0 nova_compute[188777]: 2026-02-19 20:27:22.892 188781 DEBUG nova.network.neutron [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:27:23 compute-0 nova_compute[188777]: 2026-02-19 20:27:23.060 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:23 compute-0 nova_compute[188777]: 2026-02-19 20:27:23.273 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:23 compute-0 nova_compute[188777]: 2026-02-19 20:27:23.706 188781 DEBUG nova.network.neutron [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:27:23 compute-0 nova_compute[188777]: 2026-02-19 20:27:23.996 188781 DEBUG nova.network.neutron [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.015 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Releasing lock "refresh_cache-61de29f2-275d-4f98-bb19-ef0063b0b709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.016 188781 DEBUG nova.compute.manager [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:27:24 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 19 20:27:24 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.493s CPU time.
Feb 19 20:27:24 compute-0 systemd-machined[158158]: Machine qemu-5-instance-00000005 terminated.
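The scope name in the two lines above is systemd-escaped: \x2d encodes a literal "-", so the unit is machine-qemu-5-instance-00000005.scope for libvirt domain instance-00000005. A small decoder, equivalent in effect to `systemd-escape --unescape`:

    import re

    def systemd_unescape(unit: str) -> str:
        # systemd encodes reserved bytes in unit names as \xNN.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), unit)

    print(systemd_unescape(r"machine-qemu\x2d5\x2dinstance\x2d00000005.scope"))
    # -> machine-qemu-5-instance-00000005.scope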
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.271 188781 INFO nova.virt.libvirt.driver [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Instance destroyed successfully.
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.272 188781 DEBUG nova.objects.instance [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'resources' on Instance uuid 61de29f2-275d-4f98-bb19-ef0063b0b709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.288 188781 INFO nova.virt.libvirt.driver [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Deleting instance files /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709_del
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.289 188781 INFO nova.virt.libvirt.driver [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Deletion of /var/lib/nova/instances/61de29f2-275d-4f98-bb19-ef0063b0b709_del complete
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.335 188781 INFO nova.compute.manager [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Took 0.32 seconds to destroy the instance on the hypervisor.
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.336 188781 DEBUG oslo.service.loopingcall [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.336 188781 DEBUG nova.compute.manager [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.337 188781 DEBUG nova.network.neutron [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.705 188781 DEBUG nova.network.neutron [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.724 188781 DEBUG nova.network.neutron [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.738 188781 INFO nova.compute.manager [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Took 0.40 seconds to deallocate network for instance.
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.776 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.777 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.890 188781 DEBUG nova.compute.provider_tree [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.913 188781 DEBUG nova.scheduler.client.report [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.941 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:24 compute-0 nova_compute[188777]: 2026-02-19 20:27:24.972 188781 INFO nova.scheduler.client.report [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Deleted allocations for instance 61de29f2-275d-4f98-bb19-ef0063b0b709
Feb 19 20:27:25 compute-0 nova_compute[188777]: 2026-02-19 20:27:25.030 188781 DEBUG oslo_concurrency.lockutils [None req-1f36b617-252c-4244-a9d3-334dd3c09b04 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "61de29f2-275d-4f98-bb19-ef0063b0b709" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
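That closes one complete terminate cycle, 2.15 s end to end. Condensed, the ordering visible from 20:27:22 onward is: take the per-instance lock, clear pending events, destroy the libvirt domain (the machine scope deactivates), delete the instance files, deallocate the Neutron ports, update the resource tracker under the compute_resources lock, and delete the placement allocations. A schematic of that sequence with stub helpers — not nova's actual code:

    from contextlib import contextmanager

    @contextmanager
    def lock(name):
        yield  # stand-in for lockutils.lock(name)

    def do_terminate_instance(uuid):
        with lock(uuid):                      # serialize per instance UUID
            with lock(uuid + "-events"):
                pass                          # clear pending external events
            # destroy the libvirt domain; the qemu machine scope deactivates
            # delete /var/lib/nova/instances/<uuid>_del
            # deallocate neutron ports; info cache updated to []
            with lock("compute_resources"):
                pass                          # ResourceTracker.update_usage
            # delete placement allocations, then release the instance lock

    do_terminate_instance("61de29f2-275d-4f98-bb19-ef0063b0b709")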
Feb 19 20:27:28 compute-0 nova_compute[188777]: 2026-02-19 20:27:28.063 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:28 compute-0 nova_compute[188777]: 2026-02-19 20:27:28.277 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:29 compute-0 podman[204724]: time="2026-02-19T20:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:27:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:27:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
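Those two GETs are libpod REST calls served by the podman system service (the Go-http-client is most likely the metrics exporter polling container state). The same endpoint can be queried from Python over the socket the exporter mounts at /run/podman/podman.sock; a standard-library sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over the podman service's Unix domain socket.
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])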
Feb 19 20:27:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:27:30.440 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:27:30.442 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:27:30.443 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:31 compute-0 openstack_network_exporter[207898]: ERROR   20:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:27:31 compute-0 openstack_network_exporter[207898]: ERROR   20:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
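These two errors recur on the exporter's scrape interval: the dpif-netdev/* appctl commands exist only for the userspace (netdev/PMD) datapath, and this host runs the kernel datapath, so ovs-vswitchd answers "please specify an existing datapath". The failing probes can be reproduced directly (requires ovs-appctl and access to the vswitchd control socket):

    import subprocess

    # dpif-netdev/* applies only to a userspace (DPDK/PMD) datapath;
    # on a kernel-datapath host both calls fail exactly as logged.
    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        r = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        if r.returncode != 0:
            print(f"{cmd}: {(r.stderr or r.stdout).strip()}")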
Feb 19 20:27:33 compute-0 nova_compute[188777]: 2026-02-19 20:27:33.066 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:33 compute-0 nova_compute[188777]: 2026-02-19 20:27:33.280 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:35 compute-0 podman[248378]: 2026-02-19 20:27:35.436431584 +0000 UTC m=+0.103649433 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.7, com.redhat.component=ubi9-minimal-container, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, architecture=x86_64)
Feb 19 20:27:35 compute-0 podman[248379]: 2026-02-19 20:27:35.439809979 +0000 UTC m=+0.101728534 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:27:38 compute-0 nova_compute[188777]: 2026-02-19 20:27:38.068 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:38 compute-0 nova_compute[188777]: 2026-02-19 20:27:38.282 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:39 compute-0 nova_compute[188777]: 2026-02-19 20:27:39.267 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771532844.265654, 61de29f2-275d-4f98-bb19-ef0063b0b709 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:27:39 compute-0 nova_compute[188777]: 2026-02-19 20:27:39.268 188781 INFO nova.compute.manager [-] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] VM Stopped (Lifecycle Event)
Feb 19 20:27:39 compute-0 nova_compute[188777]: 2026-02-19 20:27:39.290 188781 DEBUG nova.compute.manager [None req-e5843203-1b31-4517-aaef-1ec71a78f80f - - - - - -] [instance: 61de29f2-275d-4f98-bb19-ef0063b0b709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
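After the Stopped lifecycle event, nova re-reads the domain's power state before syncing it to the API database. With the libvirt-python bindings the equivalent check looks like the sketch below; here the guest was already undefined during teardown, so the lookup raises libvirtError and the instance is recorded as powered off:

    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("61de29f2-275d-4f98-bb19-ef0063b0b709")
        state, reason = dom.state()  # e.g. libvirt.VIR_DOMAIN_SHUTOFF
        print(state, reason)
    except libvirt.libvirtError:
        print("domain already undefined")
    finally:
        conn.close()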
Feb 19 20:27:39 compute-0 podman[248422]: 2026-02-19 20:27:39.416898841 +0000 UTC m=+0.098875555 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 19 20:27:40 compute-0 nova_compute[188777]: 2026-02-19 20:27:40.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:40 compute-0 nova_compute[188777]: 2026-02-19 20:27:40.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
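The skip is gated by a single oslo.config option: deferred delete only runs when reclaim_instance_interval is positive, and its default is 0. A minimal reproduction of the gate:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    def _reclaim_queued_deletes():
        # Deferred delete is disabled for any non-positive interval.
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: purge SOFT_DELETED instances older than the interval

    _reclaim_queued_deletes()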
Feb 19 20:27:43 compute-0 nova_compute[188777]: 2026-02-19 20:27:43.071 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:43 compute-0 nova_compute[188777]: 2026-02-19 20:27:43.285 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:43 compute-0 podman[248442]: 2026-02-19 20:27:43.440213092 +0000 UTC m=+0.119237648 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Feb 19 20:27:43 compute-0 podman[248443]: 2026-02-19 20:27:43.448605923 +0000 UTC m=+0.120949501 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Feb 19 20:27:43 compute-0 sshd-session[248440]: Invalid user x from 83.235.16.111 port 52802
Feb 19 20:27:43 compute-0 sshd-session[248440]: Received disconnect from 83.235.16.111 port 52802:11: Bye Bye [preauth]
Feb 19 20:27:43 compute-0 sshd-session[248440]: Disconnected from invalid user x 83.235.16.111 port 52802 [preauth]
Feb 19 20:27:45 compute-0 nova_compute[188777]: 2026-02-19 20:27:45.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:46 compute-0 podman[248480]: 2026-02-19 20:27:46.38332928 +0000 UTC m=+0.061546134 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:27:47 compute-0 nova_compute[188777]: 2026-02-19 20:27:47.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.073 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.287 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.308 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.309 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.310 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.311 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.410 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 podman[248505]: 2026-02-19 20:27:48.424218178 +0000 UTC m=+0.105200402 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.469 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.470 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.540 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.541 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 sshd-session[247862]: Received disconnect from 38.102.83.176 port 45202:11: disconnected by user
Feb 19 20:27:48 compute-0 sshd-session[247862]: Disconnected from user zuul 38.102.83.176 port 45202
Feb 19 20:27:48 compute-0 sshd-session[247859]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:27:48 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Feb 19 20:27:48 compute-0 systemd[1]: session-29.scope: Consumed 1.023s CPU time.
Feb 19 20:27:48 compute-0 systemd-logind[810]: Session 29 logged out. Waiting for processes to exit.
Feb 19 20:27:48 compute-0 systemd-logind[810]: Removed session 29.
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.619 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.621 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.685 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.697 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.749 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.750 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.832 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.834 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.904 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.904 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:27:48 compute-0 nova_compute[188777]: 2026-02-19 20:27:48.951 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
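Each qemu-img info call above is wrapped in oslo.concurrency's prlimit shim, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed or hostile image cannot wedge the agent. The same invocation through the library, with the disk path taken from the log:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(
            address_space=1073741824,   # --as
            cpu_time=30,                # --cpu
        ),
        env_variables={"LC_ALL": "C", "LANG": "C"},
    )
    print(out)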
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.323 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.324 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4927MB free_disk=72.19921493530273GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.324 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.324 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.414 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.414 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.414 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.414 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.490 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.505 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.537 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:27:49 compute-0 nova_compute[188777]: 2026-02-19 20:27:49.537 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.537 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.537 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.538 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.857 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.857 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.857 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:27:51 compute-0 nova_compute[188777]: 2026-02-19 20:27:51.858 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:27:52 compute-0 podman[248549]: 2026-02-19 20:27:52.444759289 +0000 UTC m=+0.126375509 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.019 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.037 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.038 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.039 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.077 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:27:53 compute-0 nova_compute[188777]: 2026-02-19 20:27:53.289 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:58 compute-0 nova_compute[188777]: 2026-02-19 20:27:58.079 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:58 compute-0 nova_compute[188777]: 2026-02-19 20:27:58.292 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:27:59 compute-0 podman[204724]: time="2026-02-19T20:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:27:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:27:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Feb 19 20:28:00 compute-0 sshd-session[248575]: Accepted publickey for zuul from 38.102.83.176 port 33252 ssh2: RSA SHA256:Tz8+J60H2NvCUNbrBLaXS+pTxQ8qAPOs7gJ/OpaGYjQ
Feb 19 20:28:00 compute-0 systemd-logind[810]: New session 30 of user zuul.
Feb 19 20:28:00 compute-0 systemd[1]: Started Session 30 of User zuul.
Feb 19 20:28:00 compute-0 sshd-session[248575]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:28:01 compute-0 sudo[248752]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vckeowautuwyoogvmazocrsjbwwwrcow ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771532880.7452698-59839-247046345132674/AnsiballZ_command.py'
Feb 19 20:28:01 compute-0 sudo[248752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:28:01 compute-0 openstack_network_exporter[207898]: ERROR   20:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:28:01 compute-0 openstack_network_exporter[207898]: ERROR   20:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:28:01 compute-0 python3[248755]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:28:01 compute-0 sudo[248752]: pam_unix(sudo:session): session closed for user root
Feb 19 20:28:03 compute-0 nova_compute[188777]: 2026-02-19 20:28:03.082 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:03 compute-0 nova_compute[188777]: 2026-02-19 20:28:03.295 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:06 compute-0 podman[248795]: 2026-02-19 20:28:06.412823782 +0000 UTC m=+0.088004517 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-type=git, io.openshift.expose-services=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, version=9.7, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 19 20:28:06 compute-0 podman[248796]: 2026-02-19 20:28:06.43175411 +0000 UTC m=+0.108950238 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:28:08 compute-0 nova_compute[188777]: 2026-02-19 20:28:08.085 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:08 compute-0 nova_compute[188777]: 2026-02-19 20:28:08.298 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:08 compute-0 sudo[249011]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolosmmqmtrcaoausouzdhctjxioabpo ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771532888.2597058-60006-269403541114660/AnsiballZ_command.py'
Feb 19 20:28:08 compute-0 sudo[249011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:28:08 compute-0 python3[249014]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:28:09 compute-0 sudo[249011]: pam_unix(sudo:session): session closed for user root
Feb 19 20:28:10 compute-0 podman[249053]: 2026-02-19 20:28:10.409479303 +0000 UTC m=+0.089033559 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:28:13 compute-0 nova_compute[188777]: 2026-02-19 20:28:13.087 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:13 compute-0 nova_compute[188777]: 2026-02-19 20:28:13.300 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:14 compute-0 podman[249071]: 2026-02-19 20:28:14.410757849 +0000 UTC m=+0.087836831 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=kepler, vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc.)
Feb 19 20:28:14 compute-0 podman[249072]: 2026-02-19 20:28:14.411515173 +0000 UTC m=+0.082446414 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:28:17 compute-0 podman[249108]: 2026-02-19 20:28:17.411577173 +0000 UTC m=+0.078540973 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:28:18 compute-0 nova_compute[188777]: 2026-02-19 20:28:18.089 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:18 compute-0 sudo[249305]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrvcaqsmuxbcorsjwhsgpubamghuthag ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771532897.6538227-60165-179641396518205/AnsiballZ_command.py'
Feb 19 20:28:18 compute-0 sudo[249305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:28:18 compute-0 python3[249308]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:28:18 compute-0 nova_compute[188777]: 2026-02-19 20:28:18.302 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:18 compute-0 sudo[249305]: pam_unix(sudo:session): session closed for user root
Feb 19 20:28:19 compute-0 podman[249347]: 2026-02-19 20:28:19.435610855 +0000 UTC m=+0.108999519 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:28:23 compute-0 nova_compute[188777]: 2026-02-19 20:28:23.091 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:23 compute-0 podman[249367]: 2026-02-19 20:28:23.261582538 +0000 UTC m=+0.132087618 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 19 20:28:23 compute-0 nova_compute[188777]: 2026-02-19 20:28:23.304 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:25 compute-0 sshd-session[249394]: Invalid user sftptest from 154.12.80.151 port 40598
Feb 19 20:28:25 compute-0 sshd-session[249394]: Received disconnect from 154.12.80.151 port 40598:11: Bye Bye [preauth]
Feb 19 20:28:25 compute-0 sshd-session[249394]: Disconnected from invalid user sftptest 154.12.80.151 port 40598 [preauth]
Feb 19 20:28:28 compute-0 nova_compute[188777]: 2026-02-19 20:28:28.092 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:28 compute-0 nova_compute[188777]: 2026-02-19 20:28:28.306 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:29 compute-0 podman[204724]: time="2026-02-19T20:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:28:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:28:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Feb 19 20:28:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:28:30.442 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:28:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:28:30.443 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:28:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:28:30.444 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:28:31 compute-0 sshd-session[249396]: Received disconnect from 103.119.94.10 port 48918:11: Bye Bye [preauth]
Feb 19 20:28:31 compute-0 sshd-session[249396]: Disconnected from authenticating user root 103.119.94.10 port 48918 [preauth]
Feb 19 20:28:31 compute-0 openstack_network_exporter[207898]: ERROR   20:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:28:31 compute-0 openstack_network_exporter[207898]: ERROR   20:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:28:32 compute-0 sudo[249571]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhzgmzufbgtmgdgiyporloqqjbxlzsdl ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771532911.9465437-60387-46648356400187/AnsiballZ_command.py'
Feb 19 20:28:32 compute-0 sudo[249571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 19 20:28:32 compute-0 python3[249574]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 19 20:28:33 compute-0 sudo[249571]: pam_unix(sudo:session): session closed for user root
Feb 19 20:28:33 compute-0 nova_compute[188777]: 2026-02-19 20:28:33.094 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:33 compute-0 nova_compute[188777]: 2026-02-19 20:28:33.309 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:37 compute-0 podman[249614]: 2026-02-19 20:28:37.408557944 +0000 UTC m=+0.093401724 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:28:37 compute-0 podman[249613]: 2026-02-19 20:28:37.415037065 +0000 UTC m=+0.097510292 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container)
Feb 19 20:28:38 compute-0 nova_compute[188777]: 2026-02-19 20:28:38.097 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:38 compute-0 nova_compute[188777]: 2026-02-19 20:28:38.312 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:41 compute-0 nova_compute[188777]: 2026-02-19 20:28:41.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:41 compute-0 nova_compute[188777]: 2026-02-19 20:28:41.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:28:41 compute-0 podman[249657]: 2026-02-19 20:28:41.394538965 +0000 UTC m=+0.072463384 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:28:43 compute-0 nova_compute[188777]: 2026-02-19 20:28:43.101 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:43 compute-0 sshd-session[249656]: Invalid user minecraft from 125.94.106.195 port 47600
Feb 19 20:28:43 compute-0 nova_compute[188777]: 2026-02-19 20:28:43.316 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:43 compute-0 sshd-session[249656]: Received disconnect from 125.94.106.195 port 47600:11: Bye Bye [preauth]
Feb 19 20:28:43 compute-0 sshd-session[249656]: Disconnected from invalid user minecraft 125.94.106.195 port 47600 [preauth]
Feb 19 20:28:44 compute-0 podman[249677]: 2026-02-19 20:28:44.757403102 +0000 UTC m=+0.089760130 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 19 20:28:44 compute-0 podman[249676]: 2026-02-19 20:28:44.761441658 +0000 UTC m=+0.092358552 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Feb 19 20:28:47 compute-0 nova_compute[188777]: 2026-02-19 20:28:47.261 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:47 compute-0 nova_compute[188777]: 2026-02-19 20:28:47.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:48 compute-0 nova_compute[188777]: 2026-02-19 20:28:48.103 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:48 compute-0 nova_compute[188777]: 2026-02-19 20:28:48.319 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:48 compute-0 podman[249718]: 2026-02-19 20:28:48.442435656 +0000 UTC m=+0.115286735 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:28:49 compute-0 nova_compute[188777]: 2026-02-19 20:28:49.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.309 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.310 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.310 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
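
The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock logging. A minimal sketch (not nova's actual code) of a function guarded by the same named lock, which produces exactly those DEBUG lines:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # While this body runs, oslo.concurrency holds the named internal
        # semaphore; entering and leaving it emits the Acquiring/acquired/
        # released DEBUG lines seen above.
        pass

    clean_compute_node_cache()
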
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.310 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:28:50 compute-0 podman[249743]: 2026-02-19 20:28:50.387667639 +0000 UTC m=+0.075346394 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.404 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.461 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.462 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.528 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.529 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.581 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:50 compute-0 nova_compute[188777]: 2026-02-19 20:28:50.582 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.240 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.247 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.296 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.297 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.344 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.345 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.402 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.403 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.451 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
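
Each qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, which caps the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 seconds) before running the real command. A sketch of the equivalent call, with the path and limits taken from the logged command lines:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        # ProcessLimits becomes the "prlimit --as=... --cpu=..." wrapper
        # visible in the logged subprocess invocations.
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    print(out)
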
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.774 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.775 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4920MB free_disk=72.19874572753906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.775 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.776 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.862 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.863 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.864 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.864 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.892 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.916 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.916 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.939 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:28:51 compute-0 nova_compute[188777]: 2026-02-19 20:28:51.982 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:28:52 compute-0 nova_compute[188777]: 2026-02-19 20:28:52.067 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:28:52 compute-0 nova_compute[188777]: 2026-02-19 20:28:52.082 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
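
The inventory dict above is what placement uses for capacity: for each resource class the schedulable amount is (total - reserved) * allocation_ratio, so the raw "Total usable vcpus: 8" count becomes 32 schedulable VCPUs once the 4.0 ratio is applied. Worked out with the logged numbers:

    # Capacity math for the inventory reported above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
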
Feb 19 20:28:52 compute-0 nova_compute[188777]: 2026-02-19 20:28:52.083 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:28:52 compute-0 nova_compute[188777]: 2026-02-19 20:28:52.083 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:28:53 compute-0 nova_compute[188777]: 2026-02-19 20:28:53.106 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:53 compute-0 nova_compute[188777]: 2026-02-19 20:28:53.321 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:53 compute-0 podman[249787]: 2026-02-19 20:28:53.461662976 +0000 UTC m=+0.145561766 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:28:54 compute-0 nova_compute[188777]: 2026-02-19 20:28:54.079 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:54 compute-0 nova_compute[188777]: 2026-02-19 20:28:54.100 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:54 compute-0 nova_compute[188777]: 2026-02-19 20:28:54.101 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:28:55 compute-0 nova_compute[188777]: 2026-02-19 20:28:55.045 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:28:55 compute-0 nova_compute[188777]: 2026-02-19 20:28:55.045 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:28:55 compute-0 nova_compute[188777]: 2026-02-19 20:28:55.046 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:28:57 compute-0 nova_compute[188777]: 2026-02-19 20:28:57.318 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [{"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:28:57 compute-0 nova_compute[188777]: 2026-02-19 20:28:57.357 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:28:57 compute-0 nova_compute[188777]: 2026-02-19 20:28:57.358 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
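
The network_info blob cached above carries everything nova needs to describe the VIF. A self-contained sketch that walks a trimmed copy of that structure (only the fields the loop touches are kept) and prints the fixed and floating addresses from the log:

    # Trimmed copy of the network_info logged for instance 1cda3ab8-...
    network_info = [{
        "id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899",
        "devname": "tapbbe0af68-c9",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.76",
                     "floating_ips": [{"address": "192.168.122.174"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["devname"], "fixed", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["devname"], "floating", fip["address"])
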
Feb 19 20:28:57 compute-0 nova_compute[188777]: 2026-02-19 20:28:57.359 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:28:58 compute-0 nova_compute[188777]: 2026-02-19 20:28:58.109 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:58 compute-0 nova_compute[188777]: 2026-02-19 20:28:58.324 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:28:59 compute-0 podman[204724]: time="2026-02-19T20:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:28:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:28:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
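
The two GETs above are the podman_exporter polling podman's libpod REST API over the unix socket it mounts (/run/podman/podman.sock). A minimal stdlib sketch, assuming root access to that socket, issuing the same containers/json request:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a local unix socket."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])
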
Feb 19 20:29:01 compute-0 openstack_network_exporter[207898]: ERROR   20:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:29:01 compute-0 openstack_network_exporter[207898]: ERROR   20:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
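
These two errors recur throughout the log: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only apply to Open vSwitch's userspace (PMD/DPDK) datapath, so on a host running the kernel datapath they fail with "please specify an existing datapath". A sketch reproducing the exporter's probes, assuming ovs-appctl is on PATH:

    import subprocess

    for cmd in ('dpif-netdev/pmd-perf-show', 'dpif-netdev/pmd-rxq-show'):
        r = subprocess.run(['ovs-appctl', cmd], capture_output=True, text=True)
        # On a kernel-datapath host both calls fail with
        # "please specify an existing datapath", matching the log.
        print(cmd, '->', r.returncode, (r.stderr or r.stdout).strip())
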
Feb 19 20:29:03 compute-0 nova_compute[188777]: 2026-02-19 20:29:03.111 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:03 compute-0 nova_compute[188777]: 2026-02-19 20:29:03.326 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:08 compute-0 nova_compute[188777]: 2026-02-19 20:29:08.114 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:08 compute-0 nova_compute[188777]: 2026-02-19 20:29:08.328 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:08 compute-0 podman[249814]: 2026-02-19 20:29:08.413370726 +0000 UTC m=+0.088916733 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:29:08 compute-0 podman[249813]: 2026-02-19 20:29:08.422092608 +0000 UTC m=+0.100194055 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:29:11 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:29:12 compute-0 podman[249856]: 2026-02-19 20:29:12.407943584 +0000 UTC m=+0.088797790 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 19 20:29:13 compute-0 nova_compute[188777]: 2026-02-19 20:29:13.116 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:13 compute-0 nova_compute[188777]: 2026-02-19 20:29:13.332 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.142 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.143 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.143 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.144 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.145 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.146 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.147 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.148 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.149 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.151 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
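
The objects being registered above are stevedore extensions that ceilometer loads from its entry-point namespaces. A sketch, assuming ceilometer is installed in the environment and that 'ceilometer.poll.compute' is the relevant namespace, listing the pollsters the same way:

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # namespace name assumed
        invoke_on_load=False)
    # Prints pollster names such as network.outgoing.packets.error.
    print(sorted(mgr.names()))
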
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.152 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'name': 'test_0', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.155 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'name': 'vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp', 'flavor': {'id': '8030bc1a-9afb-4678-ac07-8b59a1275925', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e1a79c75-2fa3-410d-9c4c-91db3eeca51d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '59f01dee51a74ac1a9f82733f591827d', 'user_id': '9f5597a45dc34ee19bcfe938afde768f', 'hostId': 'fd9f80e206ee2256ddb900effab6d3e51f96886da6d1a8f886ddbab7', 'status': 'active', 'metadata': {'metering.server_group': '78adc0ea-8772-4283-8bd6-6dbdcecee09e'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.156 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.156 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.156 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.156 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:29:15.156670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.162 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.169 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.170 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
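The run just above repeats one fixed cycle per meter: discovery, a coordination check (no hashring lookup here, since the source has no coordination group), a heartbeat, then one sample per instance; the "Skip pollster ... no new resources found this cycle" lines that follow are the short-circuit at the top of that same cycle. A minimal sketch of the cycle under those assumptions; poll_one and its callable arguments are illustrative, not ceilometer's API:

    # Minimal sketch of the per-pollster cycle visible in the lines above.
    import logging

    LOG = logging.getLogger('sketch.polling')

    def poll_one(pollster_name, discover, get_volume, heartbeat):
        resources = discover()              # "Executing discovery process ..."
        if not resources:                   # "Skip pollster ..., no new resources"
            LOG.debug('Skip pollster %s, no new resources', pollster_name)
            return []
        # this source defines no coordination group, so no hashring is consulted
        heartbeat(pollster_name)            # "Pollster heartbeat update: ..."
        LOG.info('Polling pollster %s', pollster_name)
        samples = [(r['id'], get_volume(r)) for r in resources]
        LOG.info('Finished polling pollster %s', pollster_name)
        return samples

    if __name__ == '__main__':
        samples = poll_one(
            'network.outgoing.packets.error',
            discover=lambda: [{'id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59'}],
            get_volume=lambda r: 0,
            heartbeat=lambda name: None,
        )
        print(samples)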
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.170 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.170 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.171 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.172 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.172 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.172 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.172 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.172 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.172 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.173 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.173 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.173 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.173 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.174 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.174 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.174 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.174 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.174 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:29:15.171459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.175 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.175 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.175 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.176 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.176 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.176 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:29:15.172932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:29:15.174743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:29:15.176361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
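Note the column after the timestamp: the polling work logs as process 15 while every "Updated heartbeat" confirmation logs as process 12, which suggests a separate agent process records the heartbeats the pollster workers emit. A minimal two-process sketch of that pattern, assuming a plain pipe; the message format and topology are assumptions, not ceilometer's implementation:

    # Sketch of the two-process pattern suggested by the PID column above:
    # a worker (like PID 15) emits heartbeat names, a manager (like PID 12)
    # timestamps and records them.
    import multiprocessing as mp
    from datetime import datetime, timezone

    def manager(conn):
        while True:
            name = conn.recv()
            if name is None:        # sentinel: shut down
                break
            ts = datetime.now(timezone.utc).isoformat()
            print(f'Updated heartbeat for {name} ({ts})')

    if __name__ == '__main__':
        parent_end, worker_end = mp.Pipe()
        proc = mp.Process(target=manager, args=(parent_end,))
        proc.start()
        worker_end.send('network.outgoing.packets.error')
        worker_end.send(None)
        proc.join()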
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.206 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.234 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
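The power.state volume of 1 reported for both instances is libvirt's virDomainState value, where 1 means VIR_DOMAIN_RUNNING, consistent with the 'OS-EXT-STS:vm_state': 'running' fields in the discovery payloads above:

    # virDomainState values as libvirt defines them; power.state samples
    # carry this raw integer.
    LIBVIRT_POWER_STATE = {
        0: 'nostate',
        1: 'running',
        2: 'blocked',
        3: 'paused',
        4: 'shutdown',
        5: 'shutoff',
        6: 'crashed',
        7: 'pmsuspended',
    }
    assert LIBVIRT_POWER_STATE[1] == 'running'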
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.235 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.236 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.236 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.236 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.237 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.237 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:29:15.236971) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.238 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.239 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.240 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.240 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.240 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:29:15.240657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.274 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.274 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.274 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.302 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.303 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.304 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.306 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
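Each instance yields three disk.device.capacity samples because each domain exposes three block devices, and the numbers line up with the m1.small flavor in the discovery payloads: two 1 GiB devices (disk: 1 and ephemeral: 1) plus a small third device (485376 and 583680 bytes) that is presumably the config drive. The arithmetic, as a quick check:

    # 1073741824 bytes is exactly 1 GiB, matching the flavor's
    # disk: 1 and ephemeral: 1 entries; the third, smaller device is
    # assumed (not confirmed by the log) to be the config drive.
    GiB = 1024 ** 3
    flavor = {'disk': 1, 'ephemeral': 1}
    assert flavor['disk'] * GiB == 1073741824
    assert flavor['ephemeral'] * GiB == 1073741824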
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.306 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.307 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.307 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.307 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.307 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:29:15.307862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.372 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.373 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.373 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 podman[249878]: 2026-02-19 20:29:15.374347514 +0000 UTC m=+0.063829321 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=kepler, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 19 20:29:15 compute-0 podman[249879]: 2026-02-19 20:29:15.403364109 +0000 UTC m=+0.088782549 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi)
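The two podman records above are container healthcheck events interleaved with the ceilometer output: each embeds the container's managed configuration, including a healthcheck block whose host directory is bind-mounted read-only at /openstack and whose test command podman runs inside the container. A sketch of reading those two fields from a config_data-shaped dict, trimmed to the kepler values logged above:

    # config_data here is cut down to the healthcheck-relevant keys from
    # the kepler event above; the printout is illustrative only.
    config_data = {
        'healthcheck': {
            'mount': '/var/lib/openstack/healthchecks/kepler',
            'test': '/openstack/healthcheck kepler',
        },
        'volumes': ['/var/lib/openstack/healthchecks/kepler:/openstack:ro,z'],
    }

    hc = config_data['healthcheck']
    print(f"host dir {hc['mount']} -> container /openstack; test: {hc['test']}")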
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.424 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.425 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.425 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:29:15.426615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.427 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/cpu volume: 44160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.427 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/cpu volume: 40350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
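The cpu meter is cumulative guest CPU time in nanoseconds (44160000000 ns is about 44.2 s consumed by the first instance since boot), so a utilisation figure only falls out of the delta between two successive polls. A worked sketch, where the earlier sample and the 300 s polling interval are both invented for illustration; only the current sample and the 1 vCPU flavor come from the log:

    # prev_ns and interval_s are illustrative assumptions, not log values.
    NS_PER_S = 1e9
    prev_ns, cur_ns = 43_000_000_000, 44_160_000_000
    interval_s, vcpus = 300, 1      # assumed period; flavor m1.small has 1 vCPU
    cpu_util_pct = (cur_ns - prev_ns) / NS_PER_S / (interval_s * vcpus) * 100
    print(f'{cpu_util_pct:.2f}% average utilisation over the interval')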
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.428 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.429 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:29:15.429023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.429 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 658474829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.429 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 116712843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.429 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.latency volume: 151528840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.430 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 683601533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.430 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 109290795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.430 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.latency volume: 110141141 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.430 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:29:15.431510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.431 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.432 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.433 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:29:15.432944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.433 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.433 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:29:15.434440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.434 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.435 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.435 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.435 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.435 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:29:15.436715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.436 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.437 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.437 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.437 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.437 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.437 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.437 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.438 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:29:15.438004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.438 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.438 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.438 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.438 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.439 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.439 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.439 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.439 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:29:15.440294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.441 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.441 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.441 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.441 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:29:15.442599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.442 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.443 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.443 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.443 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.443 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.443 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.444 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.444 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.444 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.444 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.444 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.444 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:29:15.444946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.445 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 2413036213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.445 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 10941917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.445 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.445 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 1916273341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.446 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 10533639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.446 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.446 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.446 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:29:15.447307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.447 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.448 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.448 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.448 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.448 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.448 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.449 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:29:15.449022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.449 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.449 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.449 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.449 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.450 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.450 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.450 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.450 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:29:15.451228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.451 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.452 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.452 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.452 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.452 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.452 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.452 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:29:15.452060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:29:15.453153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.453 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 DEBUG ceilometer.compute.pollsters [-] 5aaac42d-946d-4c6f-9bde-23b8b6613b59/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 DEBUG ceilometer.compute.pollsters [-] 1cda3ab8-0805-4bcd-955c-996994fd3cb4/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.454 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:29:15.454209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.455 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.455 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.455 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.455 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.456 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.457 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:29:15.458 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:29:18 compute-0 nova_compute[188777]: 2026-02-19 20:29:18.120 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:18 compute-0 nova_compute[188777]: 2026-02-19 20:29:18.335 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:19 compute-0 podman[249917]: 2026-02-19 20:29:19.362080808 +0000 UTC m=+0.050342430 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:29:19 compute-0 sshd-session[249915]: Invalid user oracle from 160.187.147.124 port 51974
Feb 19 20:29:19 compute-0 sshd-session[249915]: Received disconnect from 160.187.147.124 port 51974:11: Bye Bye [preauth]
Feb 19 20:29:19 compute-0 sshd-session[249915]: Disconnected from invalid user oracle 160.187.147.124 port 51974 [preauth]
Feb 19 20:29:21 compute-0 podman[249940]: 2026-02-19 20:29:21.453699209 +0000 UTC m=+0.131263753 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true)
Feb 19 20:29:23 compute-0 nova_compute[188777]: 2026-02-19 20:29:23.122 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:23 compute-0 nova_compute[188777]: 2026-02-19 20:29:23.338 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:24 compute-0 podman[249960]: 2026-02-19 20:29:24.451716431 +0000 UTC m=+0.119124095 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 19 20:29:28 compute-0 nova_compute[188777]: 2026-02-19 20:29:28.124 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:28 compute-0 nova_compute[188777]: 2026-02-19 20:29:28.340 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:29 compute-0 podman[204724]: time="2026-02-19T20:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:29:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:29:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 19 20:29:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:29:30.443 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:29:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:29:30.444 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:29:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:29:30.444 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:29:31 compute-0 openstack_network_exporter[207898]: ERROR   20:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:29:31 compute-0 openstack_network_exporter[207898]: ERROR   20:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:29:32 compute-0 sshd-session[248578]: Received disconnect from 38.102.83.176 port 33252:11: disconnected by user
Feb 19 20:29:32 compute-0 sshd-session[248578]: Disconnected from user zuul 38.102.83.176 port 33252
Feb 19 20:29:32 compute-0 sshd-session[248575]: pam_unix(sshd:session): session closed for user zuul
Feb 19 20:29:32 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Feb 19 20:29:32 compute-0 systemd[1]: session-30.scope: Consumed 3.680s CPU time.
Feb 19 20:29:32 compute-0 systemd-logind[810]: Session 30 logged out. Waiting for processes to exit.
Feb 19 20:29:32 compute-0 systemd-logind[810]: Removed session 30.
Feb 19 20:29:33 compute-0 nova_compute[188777]: 2026-02-19 20:29:33.126 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:33 compute-0 nova_compute[188777]: 2026-02-19 20:29:33.503 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:38 compute-0 nova_compute[188777]: 2026-02-19 20:29:38.129 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:38 compute-0 nova_compute[188777]: 2026-02-19 20:29:38.506 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:39 compute-0 podman[249987]: 2026-02-19 20:29:39.40199662 +0000 UTC m=+0.078472107 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:29:39 compute-0 podman[249986]: 2026-02-19 20:29:39.421766705 +0000 UTC m=+0.102903238 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1770267347, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Feb 19 20:29:40 compute-0 nova_compute[188777]: 2026-02-19 20:29:40.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:40 compute-0 nova_compute[188777]: 2026-02-19 20:29:40.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:29:42 compute-0 nova_compute[188777]: 2026-02-19 20:29:42.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:42 compute-0 nova_compute[188777]: 2026-02-19 20:29:42.281 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:29:43 compute-0 nova_compute[188777]: 2026-02-19 20:29:43.130 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:43 compute-0 podman[250031]: 2026-02-19 20:29:43.383843901 +0000 UTC m=+0.065326577 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 19 20:29:43 compute-0 nova_compute[188777]: 2026-02-19 20:29:43.508 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:46 compute-0 podman[250049]: 2026-02-19 20:29:46.391592015 +0000 UTC m=+0.079375496 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_id=kepler, name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 19 20:29:46 compute-0 podman[250050]: 2026-02-19 20:29:46.408653537 +0000 UTC m=+0.091334799 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:29:48 compute-0 nova_compute[188777]: 2026-02-19 20:29:48.133 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:48 compute-0 nova_compute[188777]: 2026-02-19 20:29:48.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:48 compute-0 nova_compute[188777]: 2026-02-19 20:29:48.511 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:49 compute-0 nova_compute[188777]: 2026-02-19 20:29:49.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:50 compute-0 nova_compute[188777]: 2026-02-19 20:29:50.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:50 compute-0 nova_compute[188777]: 2026-02-19 20:29:50.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:50 compute-0 podman[250087]: 2026-02-19 20:29:50.418716016 +0000 UTC m=+0.095953501 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.290 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.290 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.290 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.385 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.448 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.449 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.500 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.502 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.586 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.588 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.658 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.666 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.735 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.737 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.803 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.805 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.868 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.870 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:29:51 compute-0 nova_compute[188777]: 2026-02-19 20:29:51.922 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.319 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.320 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=72.19874572753906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.320 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.321 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:29:52 compute-0 podman[250135]: 2026-02-19 20:29:52.395828907 +0000 UTC m=+0.082333447 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.508 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.509 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.509 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.509 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.713 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.725 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.726 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:29:52 compute-0 nova_compute[188777]: 2026-02-19 20:29:52.726 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:29:53 compute-0 nova_compute[188777]: 2026-02-19 20:29:53.135 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:53 compute-0 nova_compute[188777]: 2026-02-19 20:29:53.514 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:54 compute-0 nova_compute[188777]: 2026-02-19 20:29:54.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:54 compute-0 nova_compute[188777]: 2026-02-19 20:29:54.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:29:54 compute-0 nova_compute[188777]: 2026-02-19 20:29:54.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:29:55 compute-0 nova_compute[188777]: 2026-02-19 20:29:55.123 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:29:55 compute-0 nova_compute[188777]: 2026-02-19 20:29:55.124 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:29:55 compute-0 nova_compute[188777]: 2026-02-19 20:29:55.124 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:29:55 compute-0 nova_compute[188777]: 2026-02-19 20:29:55.125 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:29:55 compute-0 podman[250153]: 2026-02-19 20:29:55.432808266 +0000 UTC m=+0.113722016 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 19 20:29:56 compute-0 nova_compute[188777]: 2026-02-19 20:29:56.738 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [{"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:29:56 compute-0 nova_compute[188777]: 2026-02-19 20:29:56.755 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-5aaac42d-946d-4c6f-9bde-23b8b6613b59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:29:56 compute-0 nova_compute[188777]: 2026-02-19 20:29:56.756 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:29:56 compute-0 nova_compute[188777]: 2026-02-19 20:29:56.757 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:56 compute-0 nova_compute[188777]: 2026-02-19 20:29:56.758 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:29:58 compute-0 nova_compute[188777]: 2026-02-19 20:29:58.138 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:58 compute-0 nova_compute[188777]: 2026-02-19 20:29:58.516 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:29:59 compute-0 podman[204724]: time="2026-02-19T20:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:29:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:29:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Feb 19 20:30:01 compute-0 openstack_network_exporter[207898]: ERROR   20:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:30:01 compute-0 openstack_network_exporter[207898]: ERROR   20:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:30:03 compute-0 nova_compute[188777]: 2026-02-19 20:30:03.141 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:03 compute-0 nova_compute[188777]: 2026-02-19 20:30:03.277 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:03 compute-0 nova_compute[188777]: 2026-02-19 20:30:03.278 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:30:03 compute-0 nova_compute[188777]: 2026-02-19 20:30:03.297 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:30:03 compute-0 nova_compute[188777]: 2026-02-19 20:30:03.520 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:08 compute-0 nova_compute[188777]: 2026-02-19 20:30:08.143 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:08 compute-0 nova_compute[188777]: 2026-02-19 20:30:08.524 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:10 compute-0 podman[250178]: 2026-02-19 20:30:10.396409398 +0000 UTC m=+0.077431564 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9/ubi-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1770267347, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 19 20:30:10 compute-0 podman[250179]: 2026-02-19 20:30:10.441255606 +0000 UTC m=+0.118355429 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:30:13 compute-0 nova_compute[188777]: 2026-02-19 20:30:13.146 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:13 compute-0 nova_compute[188777]: 2026-02-19 20:30:13.526 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:14 compute-0 podman[250226]: 2026-02-19 20:30:14.388291215 +0000 UTC m=+0.070552051 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 19 20:30:14 compute-0 sshd-session[250224]: Invalid user titu from 125.94.106.195 port 42828
Feb 19 20:30:14 compute-0 sshd-session[250224]: Received disconnect from 125.94.106.195 port 42828:11: Bye Bye [preauth]
Feb 19 20:30:14 compute-0 sshd-session[250224]: Disconnected from invalid user titu 125.94.106.195 port 42828 [preauth]
Feb 19 20:30:17 compute-0 podman[250244]: 2026-02-19 20:30:17.388082553 +0000 UTC m=+0.077317671 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, version=9.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., name=ubi9, container_name=kepler, distribution-scope=public, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 19 20:30:17 compute-0 podman[250245]: 2026-02-19 20:30:17.407355704 +0000 UTC m=+0.090799581 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 19 20:30:18 compute-0 nova_compute[188777]: 2026-02-19 20:30:18.148 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:18 compute-0 nova_compute[188777]: 2026-02-19 20:30:18.529 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:21 compute-0 podman[250282]: 2026-02-19 20:30:21.429006249 +0000 UTC m=+0.106491370 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:30:23 compute-0 nova_compute[188777]: 2026-02-19 20:30:23.151 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:23 compute-0 podman[250306]: 2026-02-19 20:30:23.429794758 +0000 UTC m=+0.112507318 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:30:23 compute-0 nova_compute[188777]: 2026-02-19 20:30:23.532 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:26 compute-0 podman[250327]: 2026-02-19 20:30:26.447835126 +0000 UTC m=+0.131609664 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 19 20:30:28 compute-0 nova_compute[188777]: 2026-02-19 20:30:28.153 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:28 compute-0 nova_compute[188777]: 2026-02-19 20:30:28.534 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:29 compute-0 podman[204724]: time="2026-02-19T20:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:30:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:30:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Feb 19 20:30:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:30.445 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:30.446 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:30.447 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:31 compute-0 openstack_network_exporter[207898]: ERROR   20:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:30:31 compute-0 openstack_network_exporter[207898]: ERROR   20:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.738 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.739 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.739 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.740 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.740 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.742 188781 INFO nova.compute.manager [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Terminating instance
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.744 188781 DEBUG nova.compute.manager [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:30:31 compute-0 kernel: tapbbe0af68-c9 (unregistering): left promiscuous mode
Feb 19 20:30:31 compute-0 NetworkManager[57033]: <info>  [1771533031.7945] device (tapbbe0af68-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:30:31 compute-0 ovn_controller[98843]: 2026-02-19T20:30:31Z|00058|binding|INFO|Releasing lport bbe0af68-c9d2-4b14-854b-b5355d9ef899 from this chassis (sb_readonly=0)
Feb 19 20:30:31 compute-0 ovn_controller[98843]: 2026-02-19T20:30:31Z|00059|binding|INFO|Setting lport bbe0af68-c9d2-4b14-854b-b5355d9ef899 down in Southbound
Feb 19 20:30:31 compute-0 ovn_controller[98843]: 2026-02-19T20:30:31Z|00060|binding|INFO|Removing iface tapbbe0af68-c9 ovn-installed in OVS
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.808 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.816 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.818 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:50:54 192.168.0.76'], port_security=['fa:16:3e:2c:50:54 192.168.0.76'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5pf5gh4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-port-pfpt4fxi2gjn', 'neutron:cidrs': '192.168.0.76/24', 'neutron:device_id': '1cda3ab8-0805-4bcd-955c-996994fd3cb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5pf5gh4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-port-pfpt4fxi2gjn', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.174', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=bbe0af68-c9d2-4b14-854b-b5355d9ef899) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.819 108175 INFO neutron.agent.ovn.metadata.agent [-] Port bbe0af68-c9d2-4b14-854b-b5355d9ef899 in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 unbound from our chassis
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.820 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec82c3b7-5389-43ab-a939-ce6cd12f9681
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.836 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[dbea4607-c52a-4cd0-8a25-093c6d7be7c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:31 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 19 20:30:31 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 52.170s CPU time.
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.871 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[643e3c97-87dd-4cdf-b33a-63219873ffe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:31 compute-0 systemd-machined[158158]: Machine qemu-4-instance-00000004 terminated.
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.875 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2d01d8-cf61-4e05-8de5-d1e4f7f1699c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.903 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[586ad8d3-f121-469e-83e9-543761ccfdd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.919 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c96e1b14-a995-4251-b8b8-8fbc18f1ce5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec82c3b7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:e7:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 658, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 658, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348344, 'reachable_time': 32106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250366, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.934 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bf1590-4e2d-4940-ab42-f9283e9dca05]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348361, 'tstamp': 348361}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250367, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapec82c3b7-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 348365, 'tstamp': 348365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250367, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
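[Editor's note] The two privsep replies above are netlink dumps (RTM_NEWLINK, RTM_NEWADDR) taken inside the metadata namespace named in their 'target' field, ovnmeta-ec82c3b7-…; they confirm tapec82c3b7-51 is up and carries both the metadata address 169.254.169.254/32 and 192.168.0.2/24. The attribute layout matches pyroute2, which privsep appears to be driving here; a sketch reproducing the same check, assuming pyroute2 is available and the namespace exists:

```python
# Sketch: list the addresses the metadata agent just verified on
# tapec82c3b7-51 inside the ovnmeta namespace.
from pyroute2 import NetNS

with NetNS("ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681") as ns:
    idx = ns.link_lookup(ifname="tapec82c3b7-51")[0]
    for addr in ns.get_addr(index=idx):
        print(addr.get_attr("IFA_ADDRESS"))  # 169.254.169.254, 192.168.0.2
```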
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.935 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.937 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.943 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.943 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec82c3b7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.944 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.944 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec82c3b7-50, col_values=(('external_ids', {'iface-id': 'a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:31.945 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
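[Editor's note] DelPortCommand, AddPortCommand and DbSetCommand above are ovsdbapp transaction commands, and "Transaction caused no change" means the desired state already held, so the commit was a no-op. A sketch of the equivalent calls through ovsdbapp's Open_vSwitch API; the socket path is an assumption, the method names and the column values are taken from the log:

```python
# Sketch of the ovsdbapp transactions logged above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port("tapec82c3b7-50", bridge="br-ex", if_exists=True))
    txn.add(api.add_port("br-int", "tapec82c3b7-50", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tapec82c3b7-50",
        ("external_ids", {"iface-id": "a1c774de-4b7d-47b5-b88c-3f5d9b5c3dce"})))
```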
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.966 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.974 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.996 188781 DEBUG nova.compute.manager [req-2bd9ca58-a908-44f2-aded-4afab9079b42 req-a59ab298-d1e2-4287-8a94-964468a804f2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-vif-unplugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.997 188781 DEBUG oslo_concurrency.lockutils [req-2bd9ca58-a908-44f2-aded-4afab9079b42 req-a59ab298-d1e2-4287-8a94-964468a804f2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.997 188781 DEBUG oslo_concurrency.lockutils [req-2bd9ca58-a908-44f2-aded-4afab9079b42 req-a59ab298-d1e2-4287-8a94-964468a804f2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.998 188781 DEBUG oslo_concurrency.lockutils [req-2bd9ca58-a908-44f2-aded-4afab9079b42 req-a59ab298-d1e2-4287-8a94-964468a804f2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.998 188781 DEBUG nova.compute.manager [req-2bd9ca58-a908-44f2-aded-4afab9079b42 req-a59ab298-d1e2-4287-8a94-964468a804f2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] No waiting events found dispatching network-vif-unplugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:30:31 compute-0 nova_compute[188777]: 2026-02-19 20:30:31.999 188781 DEBUG nova.compute.manager [req-2bd9ca58-a908-44f2-aded-4afab9079b42 req-a59ab298-d1e2-4287-8a94-964468a804f2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-vif-unplugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.022 188781 INFO nova.virt.libvirt.driver [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Instance destroyed successfully.
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.022 188781 DEBUG nova.objects.instance [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'resources' on Instance uuid 1cda3ab8-0805-4bcd-955c-996994fd3cb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.052 188781 DEBUG nova.virt.libvirt.vif [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-h4amqsx-jiq3zjubtpvr-5uw2ts4vboyi-vnf-jucboitrw5qp',id=4,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:20:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='78adc0ea-8772-4283-8bd6-6dbdcecee09e'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-1ppm1a12',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:20:40Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTgyNTE5NzIxNTgxMjk5ODczNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Feb 19 20:30:32 compute-0 nova_compute[188777]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTgyNTE5NzIxNTgxMjk5ODczNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU4MjUxOTcyMTU4MTI5OTg3Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01ODI1MTk3MjE1ODEyOTk4NzM3PT0tLQo=',user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=1cda3ab8-0805-4bcd-955c-996994fd3cb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.053 188781 DEBUG nova.network.os_vif_util [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "address": "fa:16:3e:2c:50:54", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.76", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbe0af68-c9", "ovs_interfaceid": "bbe0af68-c9d2-4b14-854b-b5355d9ef899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.053 188781 DEBUG nova.network.os_vif_util [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.053 188781 DEBUG os_vif [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.055 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.055 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbe0af68-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.056 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.058 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.061 188781 INFO os_vif [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:50:54,bridge_name='br-int',has_traffic_filtering=True,id=bbe0af68-c9d2-4b14-854b-b5355d9ef899,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbbe0af68-c9')
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.062 188781 INFO nova.virt.libvirt.driver [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Deleting instance files /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4_del
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.063 188781 INFO nova.virt.libvirt.driver [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Deletion of /var/lib/nova/instances/1cda3ab8-0805-4bcd-955c-996994fd3cb4_del complete
Feb 19 20:30:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:32.118 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:30:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:32.118 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.119 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.145 188781 INFO nova.compute.manager [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Took 0.40 seconds to destroy the instance on the hypervisor.
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.146 188781 DEBUG oslo.service.loopingcall [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.147 188781 DEBUG nova.compute.manager [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:30:32 compute-0 nova_compute[188777]: 2026-02-19 20:30:32.148 188781 DEBUG nova.network.neutron [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:30:32 compute-0 rsyslogd[239379]: message too long (8192) with configured size 8096, begin of message is: 2026-02-19 20:30:32.052 188781 DEBUG nova.virt.libvirt.vif [None req-b604c647-dc [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
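[Editor's note] This rsyslogd notice explains why the VIF debug record above arrives split and with its base64 tail mangled: the 8192-byte message exceeded rsyslog's configured 8096-byte limit and was truncated. The usual remedy is raising maxMessageSize in rsyslog.conf (RainerScript: global(maxMessageSize="64k")). A sketch for finding how often this host drops message tails; the log path assumes the standard RHEL location:

```python
# Sketch: scan a saved log for rsyslog truncation notices and report the
# offending message size versus the configured limit.
import re

PAT = re.compile(r"message too long \((\d+)\) with configured size (\d+)")

with open("/var/log/messages", errors="replace") as fh:
    for line in fh:
        m = PAT.search(line)
        if m:
            print(f"dropped tail: message {m.group(1)} bytes "
                  f"> limit {m.group(2)}")
```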
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.156 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.342 188781 DEBUG nova.compute.manager [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-changed-bbe0af68-c9d2-4b14-854b-b5355d9ef899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.342 188781 DEBUG nova.compute.manager [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Refreshing instance network info cache due to event network-changed-bbe0af68-c9d2-4b14-854b-b5355d9ef899. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.343 188781 DEBUG oslo_concurrency.lockutils [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.343 188781 DEBUG oslo_concurrency.lockutils [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.343 188781 DEBUG nova.network.neutron [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Refreshing network info cache for port bbe0af68-c9d2-4b14-854b-b5355d9ef899 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.548 188781 INFO nova.network.neutron [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Port bbe0af68-c9d2-4b14-854b-b5355d9ef899 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.549 188781 DEBUG nova.network.neutron [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.568 188781 DEBUG oslo_concurrency.lockutils [req-becc138a-1b10-4071-a954-2cfafa5b2a2a req-1665cea3-b37f-45bd-9be8-169b401c7125 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-1cda3ab8-0805-4bcd-955c-996994fd3cb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.586 188781 DEBUG nova.network.neutron [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.601 188781 INFO nova.compute.manager [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Took 1.45 seconds to deallocate network for instance.
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.645 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.646 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.739 188781 DEBUG nova.compute.provider_tree [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.753 188781 DEBUG nova.scheduler.client.report [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
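[Editor's note] The inventory record above is what placement uses to size this host: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the figures in the log:

```python
# Worked check of the placement inventory in the preceding record.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```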
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.778 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.802 188781 INFO nova.scheduler.client.report [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Deleted allocations for instance 1cda3ab8-0805-4bcd-955c-996994fd3cb4
Feb 19 20:30:33 compute-0 nova_compute[188777]: 2026-02-19 20:30:33.884 188781 DEBUG oslo_concurrency.lockutils [None req-b604c647-dcbc-4134-8f6b-c82887680392 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:34 compute-0 nova_compute[188777]: 2026-02-19 20:30:34.063 188781 DEBUG nova.compute.manager [req-e307d991-1a66-4561-b886-4a6ad77e3c4c req-09dd039b-7982-410b-97a0-cf51e73d4ece 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:30:34 compute-0 nova_compute[188777]: 2026-02-19 20:30:34.063 188781 DEBUG oslo_concurrency.lockutils [req-e307d991-1a66-4561-b886-4a6ad77e3c4c req-09dd039b-7982-410b-97a0-cf51e73d4ece 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:34 compute-0 nova_compute[188777]: 2026-02-19 20:30:34.064 188781 DEBUG oslo_concurrency.lockutils [req-e307d991-1a66-4561-b886-4a6ad77e3c4c req-09dd039b-7982-410b-97a0-cf51e73d4ece 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:34 compute-0 nova_compute[188777]: 2026-02-19 20:30:34.064 188781 DEBUG oslo_concurrency.lockutils [req-e307d991-1a66-4561-b886-4a6ad77e3c4c req-09dd039b-7982-410b-97a0-cf51e73d4ece 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1cda3ab8-0805-4bcd-955c-996994fd3cb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:34 compute-0 nova_compute[188777]: 2026-02-19 20:30:34.065 188781 DEBUG nova.compute.manager [req-e307d991-1a66-4561-b886-4a6ad77e3c4c req-09dd039b-7982-410b-97a0-cf51e73d4ece 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] No waiting events found dispatching network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:30:34 compute-0 nova_compute[188777]: 2026-02-19 20:30:34.065 188781 WARNING nova.compute.manager [req-e307d991-1a66-4561-b886-4a6ad77e3c4c req-09dd039b-7982-410b-97a0-cf51e73d4ece 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Received unexpected event network-vif-plugged-bbe0af68-c9d2-4b14-854b-b5355d9ef899 for instance with vm_state deleted and task_state None.
Feb 19 20:30:35 compute-0 sshd-session[250389]: Invalid user n8n from 103.103.245.7 port 43168
Feb 19 20:30:35 compute-0 sshd-session[250389]: Received disconnect from 103.103.245.7 port 43168:11: Bye Bye [preauth]
Feb 19 20:30:35 compute-0 sshd-session[250389]: Disconnected from invalid user n8n 103.103.245.7 port 43168 [preauth]
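[Editor's note] The three sshd-session lines above are an unauthenticated brute-force probe (invalid user "n8n" from 103.103.245.7), unrelated to the instance teardown around them. A sketch for tallying such probes per source address; the log path assumes the standard RHEL auth log:

```python
# Sketch: count sshd "Invalid user" probes per source IP, the quickest
# way to see whether 103.103.245.7 is a one-off or part of a scan.
import collections
import re

PAT = re.compile(r"Invalid user \S+ from (\S+) port \d+")
hits = collections.Counter()

with open("/var/log/secure", errors="replace") as fh:
    for line in fh:
        m = PAT.search(line)
        if m:
            hits[m.group(1)] += 1

for ip, count in hits.most_common(10):
    print(ip, count)
```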
Feb 19 20:30:37 compute-0 nova_compute[188777]: 2026-02-19 20:30:37.058 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:38.121 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:38 compute-0 nova_compute[188777]: 2026-02-19 20:30:38.159 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:41 compute-0 podman[250392]: 2026-02-19 20:30:41.404577405 +0000 UTC m=+0.083657468 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:30:41 compute-0 podman[250391]: 2026-02-19 20:30:41.420628857 +0000 UTC m=+0.100284818 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, name=ubi9/ubi-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7)
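[Editor's note] Both podman health checks above report healthy; node_exporter listens on 9100 and openstack_network_exporter on 9105, each with TLS configured via their web config files. A sketch probing the node_exporter endpoint the health check just validated; disabling verification is for a quick local probe only, since the deployed certificate lives under /var/lib/openstack/certs/telemetry:

```python
# Sketch: fetch the first few metrics from node_exporter over HTTPS.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # local probe only; verify in production

with urllib.request.urlopen("https://localhost:9100/metrics",
                            context=ctx, timeout=5) as resp:
    for line in resp.read().decode().splitlines()[:5]:
        print(line)
```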
Feb 19 20:30:42 compute-0 nova_compute[188777]: 2026-02-19 20:30:42.059 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.160 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.284 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.285 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.309 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.310 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.312 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.313 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.314 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.316 188781 INFO nova.compute.manager [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Terminating instance
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.318 188781 DEBUG nova.compute.manager [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:30:43 compute-0 kernel: tap10027d6c-43 (unregistering): left promiscuous mode
Feb 19 20:30:43 compute-0 NetworkManager[57033]: <info>  [1771533043.3660] device (tap10027d6c-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.366 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 ovn_controller[98843]: 2026-02-19T20:30:43Z|00061|binding|INFO|Releasing lport 10027d6c-43cc-4a7c-be42-a49c8c914f25 from this chassis (sb_readonly=0)
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.387 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 ovn_controller[98843]: 2026-02-19T20:30:43Z|00062|binding|INFO|Setting lport 10027d6c-43cc-4a7c-be42-a49c8c914f25 down in Southbound
Feb 19 20:30:43 compute-0 ovn_controller[98843]: 2026-02-19T20:30:43Z|00063|binding|INFO|Removing iface tap10027d6c-43 ovn-installed in OVS
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.392 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.396 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:9e:14 192.168.0.193'], port_security=['fa:16:3e:e4:9e:14 192.168.0.193'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.193/24', 'neutron:device_id': '5aaac42d-946d-4c6f-9bde-23b8b6613b59', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '59f01dee51a74ac1a9f82733f591827d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46d7cf50-a73c-415e-96c4-398ffee7ce2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61958255-2fb8-4c55-809a-ee04d4cf034a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=10027d6c-43cc-4a7c-be42-a49c8c914f25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.397 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.398 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 10027d6c-43cc-4a7c-be42-a49c8c914f25 in datapath ec82c3b7-5389-43ab-a939-ce6cd12f9681 unbound from our chassis
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.398 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec82c3b7-5389-43ab-a939-ce6cd12f9681, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.399 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[af86c43f-53c8-45fd-9422-2b640b023a44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.400 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681 namespace which is not needed anymore
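
The "Matched UPDATE: PortBindingUpdatedEvent(...)" record above is ovsdbapp's row-event mechanism: the metadata agent registers an event against the Port_Binding table and reacts when the up/chassis columns flip, which is how it notices the port unbinding and tears the namespace down. A rough sketch of such an event class, assuming ovsdbapp is available; the class body is illustrative, not neutron's implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # events=('update',) and table='Port_Binding' mirror the
            # "Matched UPDATE ... table='Port_Binding'" record above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the columns that changed, e.g. the
            # up=[True] -> [False] transition logged above.
            print('lport', row.logical_port,
                  'updated; previously up =', getattr(old, 'up', None))
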
Feb 19 20:30:43 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 19 20:30:43 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 54.279s CPU time.
Feb 19 20:30:43 compute-0 systemd-machined[158158]: Machine qemu-1-instance-00000001 terminated.
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.547 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.554 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.562 188781 DEBUG nova.compute.manager [req-1dde8a16-1ead-4e51-b70e-66b0d012f144 req-d549c568-b0e4-4e40-af07-ab2a25592f48 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-vif-unplugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.562 188781 DEBUG oslo_concurrency.lockutils [req-1dde8a16-1ead-4e51-b70e-66b0d012f144 req-d549c568-b0e4-4e40-af07-ab2a25592f48 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.562 188781 DEBUG oslo_concurrency.lockutils [req-1dde8a16-1ead-4e51-b70e-66b0d012f144 req-d549c568-b0e4-4e40-af07-ab2a25592f48 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.563 188781 DEBUG oslo_concurrency.lockutils [req-1dde8a16-1ead-4e51-b70e-66b0d012f144 req-d549c568-b0e4-4e40-af07-ab2a25592f48 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.563 188781 DEBUG nova.compute.manager [req-1dde8a16-1ead-4e51-b70e-66b0d012f144 req-d549c568-b0e4-4e40-af07-ab2a25592f48 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] No waiting events found dispatching network-vif-unplugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.563 188781 DEBUG nova.compute.manager [req-1dde8a16-1ead-4e51-b70e-66b0d012f144 req-d549c568-b0e4-4e40-af07-ab2a25592f48 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-vif-unplugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 19 20:30:43 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [NOTICE]   (242315) : haproxy version is 2.8.14-c23fe91
Feb 19 20:30:43 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [NOTICE]   (242315) : path to executable is /usr/sbin/haproxy
Feb 19 20:30:43 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [WARNING]  (242315) : Exiting Master process...
Feb 19 20:30:43 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [WARNING]  (242315) : Exiting Master process...
Feb 19 20:30:43 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [ALERT]    (242315) : Current worker (242324) exited with code 143 (Terminated)
Feb 19 20:30:43 compute-0 neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681[242302]: [WARNING]  (242315) : All workers exited. Exiting... (0)
Feb 19 20:30:43 compute-0 systemd[1]: libpod-830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd.scope: Deactivated successfully.
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.599 188781 INFO nova.virt.libvirt.driver [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Instance destroyed successfully.
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.600 188781 DEBUG nova.objects.instance [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lazy-loading 'resources' on Instance uuid 5aaac42d-946d-4c6f-9bde-23b8b6613b59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:30:43 compute-0 podman[250461]: 2026-02-19 20:30:43.602785679 +0000 UTC m=+0.071687765 container died 830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.612 188781 DEBUG nova.virt.libvirt.vif [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:12:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='59f01dee51a74ac1a9f82733f591827d',ramdisk_id='',reservation_id='r-gl1rzcqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='e1a79c75-2fa3-410d-9c4c-91db3eeca51d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:12:49Z,user_data=None,user_id='9f5597a45dc34ee19bcfe938afde768f',uuid=5aaac42d-946d-4c6f-9bde-23b8b6613b59,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.613 188781 DEBUG nova.network.os_vif_util [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converting VIF {"id": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "address": "fa:16:3e:e4:9e:14", "network": {"id": "ec82c3b7-5389-43ab-a939-ce6cd12f9681", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59f01dee51a74ac1a9f82733f591827d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10027d6c-43", "ovs_interfaceid": "10027d6c-43cc-4a7c-be42-a49c8c914f25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.613 188781 DEBUG nova.network.os_vif_util [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.613 188781 DEBUG os_vif [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
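
The two "Converting VIF"/"Converted object" records plus the "Unplugging vif" record show nova translating its VIF dict into an os-vif VIFOpenVSwitch object and handing it to the os-vif library. A condensed sketch of that call, assuming the os-vif package and its ovs plugin are installed and an OVS database is reachable (identifiers copied from the log):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the vif plugins via stevedore

    my_vif = vif.VIFOpenVSwitch(
        id='10027d6c-43cc-4a7c-be42-a49c8c914f25',
        address='fa:16:3e:e4:9e:14',
        vif_name='tap10027d6c-43',
        bridge_name='br-int',
        network=network.Network(id='ec82c3b7-5389-43ab-a939-ce6cd12f9681'),
    )
    inst = instance_info.InstanceInfo(
        uuid='5aaac42d-946d-4c6f-9bde-23b8b6613b59', name='test_0')
    # Logged above as 'Unplugging vif VIFOpenVSwitch(...)'.
    os_vif.unplug(my_vif, inst)
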
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.615 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.615 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10027d6c-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.617 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.618 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.620 188781 INFO os_vif [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=10027d6c-43cc-4a7c-be42-a49c8c914f25,network=Network(ec82c3b7-5389-43ab-a939-ce6cd12f9681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10027d6c-43')
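
Under the hood, the unplug commits the DelPortCommand seen at 20:30:43.615 through an ovsdbapp transaction against the local OVSDB. Roughly equivalent standalone code, assuming ovsdbapp is installed; the socket path below is an assumption, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Hypothetical local endpoint; adjust to your ovsdb-server socket.
    OVSDB = 'unix:/run/openvswitch/db.sock'
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged DelPortCommand(port=tap10027d6c-43,
    # bridge=br-int, if_exists=True).
    api.del_port('tap10027d6c-43', bridge='br-int',
                 if_exists=True).execute(check_error=True)
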
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.621 188781 INFO nova.virt.libvirt.driver [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Deleting instance files /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59_del
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.621 188781 INFO nova.virt.libvirt.driver [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Deletion of /var/lib/nova/instances/5aaac42d-946d-4c6f-9bde-23b8b6613b59_del complete
Feb 19 20:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd-userdata-shm.mount: Deactivated successfully.
Feb 19 20:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f506d57a04a6731c7b4448db0995098917d3eb5a51f85b9ad86f8c5480cf39e-merged.mount: Deactivated successfully.
Feb 19 20:30:43 compute-0 podman[250461]: 2026-02-19 20:30:43.665418082 +0000 UTC m=+0.134320148 container cleanup 830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 19 20:30:43 compute-0 systemd[1]: libpod-conmon-830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd.scope: Deactivated successfully.
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.677 188781 INFO nova.compute.manager [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Took 0.36 seconds to destroy the instance on the hypervisor.
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.678 188781 DEBUG oslo.service.loopingcall [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.678 188781 DEBUG nova.compute.manager [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.679 188781 DEBUG nova.network.neutron [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:30:43 compute-0 podman[250510]: 2026-02-19 20:30:43.730839921 +0000 UTC m=+0.045708185 container remove 830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.736 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[58d0a13e-4f4f-4d3d-9692-1d3423bd6404]: (4, ('Thu Feb 19 08:30:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681 (830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd)\n830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd\nThu Feb 19 08:30:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681 (830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd)\n830ce657e1bdc94f9964229de9ef508d9426baa57e5efd6d846966a9e0ae99cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.739 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[52f69477-0432-4c2e-a2ee-acbe32720efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.742 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec82c3b7-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:30:43 compute-0 kernel: tapec82c3b7-50: left promiscuous mode
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.745 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.749 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[d04fbecf-42ef-442e-bd3d-c36a77879bbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 nova_compute[188777]: 2026-02-19 20:30:43.751 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.764 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[27ed89da-b87f-460b-8c89-d06a22ed99e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.765 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[76ce7db2-e9cd-4bfa-bae7-5d25d5db0f42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.779 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[54c7d8b9-6250-47ff-91d3-5a8c144f4aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 348334, 'reachable_time': 32616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250526, 'error': None, 'target': 'ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.792 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec82c3b7-5389-43ab-a939-ce6cd12f9681 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:30:43 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:30:43.793 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[151f9fa4-8292-4f4e-91a4-1b491637d0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:30:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dec82c3b7\x2d5389\x2d43ab\x2da939\x2dce6cd12f9681.mount: Deactivated successfully.
Feb 19 20:30:43 compute-0 sshd-session[250435]: Invalid user n8n from 103.250.11.249 port 43310
Feb 19 20:30:44 compute-0 sshd-session[250435]: Received disconnect from 103.250.11.249 port 43310:11: Bye Bye [preauth]
Feb 19 20:30:44 compute-0 sshd-session[250435]: Disconnected from invalid user n8n 103.250.11.249 port 43310 [preauth]
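
Interleaved with the teardown, sshd records a failed preauth probe for the invalid user "n8n" from 103.250.11.249 (and another root attempt from 83.235.16.111 a moment later). A small, hypothetical parser for counting such failures per source address; the regex targets the exact message shape above:

    import re
    from collections import Counter

    PAT = re.compile(r'Invalid user (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+)')

    hits = Counter()
    sample = ('Feb 19 20:30:43 compute-0 sshd-session[250435]: '
              'Invalid user n8n from 103.250.11.249 port 43310')
    for line in [sample]:  # in practice: iterate over the journal or auth log
        m = PAT.search(line)
        if m:
            hits[m.group('ip')] += 1

    print(hits.most_common())  # [('103.250.11.249', 1)]
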
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.319 188781 DEBUG nova.network.neutron [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.344 188781 INFO nova.compute.manager [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Took 0.67 seconds to deallocate network for instance.
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.379 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.379 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.440 188781 DEBUG nova.compute.provider_tree [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.455 188781 DEBUG nova.scheduler.client.report [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
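
The inventory dict above is what bounds scheduling on this node: placement treats effective capacity as (total - reserved) * allocation_ratio. A quick check that reproduces the numbers from the log record:

    # Effective capacity per resource class, values copied from the log.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
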
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.479 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.502 188781 INFO nova.scheduler.client.report [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Deleted allocations for instance 5aaac42d-946d-4c6f-9bde-23b8b6613b59
Feb 19 20:30:44 compute-0 nova_compute[188777]: 2026-02-19 20:30:44.564 188781 DEBUG oslo_concurrency.lockutils [None req-4d544689-07c3-4632-89f6-5fff307f2640 9f5597a45dc34ee19bcfe938afde768f 59f01dee51a74ac1a9f82733f591827d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:44 compute-0 podman[250530]: 2026-02-19 20:30:44.822098758 +0000 UTC m=+0.136168246 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:30:45 compute-0 sshd-session[250528]: Received disconnect from 83.235.16.111 port 58624:11: Bye Bye [preauth]
Feb 19 20:30:45 compute-0 sshd-session[250528]: Disconnected from authenticating user root 83.235.16.111 port 58624 [preauth]
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.678 188781 DEBUG nova.compute.manager [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.679 188781 DEBUG oslo_concurrency.lockutils [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.679 188781 DEBUG oslo_concurrency.lockutils [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.679 188781 DEBUG oslo_concurrency.lockutils [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "5aaac42d-946d-4c6f-9bde-23b8b6613b59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.680 188781 DEBUG nova.compute.manager [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] No waiting events found dispatching network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.680 188781 WARNING nova.compute.manager [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received unexpected event network-vif-plugged-10027d6c-43cc-4a7c-be42-a49c8c914f25 for instance with vm_state deleted and task_state None.
Feb 19 20:30:45 compute-0 nova_compute[188777]: 2026-02-19 20:30:45.680 188781 DEBUG nova.compute.manager [req-3dd800b8-939a-4f82-89e5-b1bf4f5ee74f req-57f8d0c9-998c-4a0c-9e82-0e0f5283e536 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Received event network-vif-deleted-10027d6c-43cc-4a7c-be42-a49c8c914f25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:30:47 compute-0 nova_compute[188777]: 2026-02-19 20:30:47.018 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771533032.0171494, 1cda3ab8-0805-4bcd-955c-996994fd3cb4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:30:47 compute-0 nova_compute[188777]: 2026-02-19 20:30:47.019 188781 INFO nova.compute.manager [-] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] VM Stopped (Lifecycle Event)
Feb 19 20:30:47 compute-0 nova_compute[188777]: 2026-02-19 20:30:47.039 188781 DEBUG nova.compute.manager [None req-3e7bad4b-76c4-47d1-b8ea-5f91b12f4401 - - - - - -] [instance: 1cda3ab8-0805-4bcd-955c-996994fd3cb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:30:48 compute-0 nova_compute[188777]: 2026-02-19 20:30:48.163 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:48 compute-0 podman[250551]: 2026-02-19 20:30:48.40868166 +0000 UTC m=+0.080156620 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:30:48 compute-0 podman[250550]: 2026-02-19 20:30:48.435868357 +0000 UTC m=+0.100021228 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64, distribution-scope=public, config_id=kepler, vendor=Red Hat, Inc.)
Feb 19 20:30:48 compute-0 nova_compute[188777]: 2026-02-19 20:30:48.617 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:49 compute-0 nova_compute[188777]: 2026-02-19 20:30:49.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:50 compute-0 nova_compute[188777]: 2026-02-19 20:30:50.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:50 compute-0 nova_compute[188777]: 2026-02-19 20:30:50.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.290 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.290 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.602 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.603 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5354MB free_disk=72.24295806884766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.603 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.604 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.654 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.655 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.679 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.693 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.708 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:30:51 compute-0 nova_compute[188777]: 2026-02-19 20:30:51.708 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:30:52 compute-0 podman[250590]: 2026-02-19 20:30:52.415273441 +0000 UTC m=+0.093435143 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:30:52 compute-0 nova_compute[188777]: 2026-02-19 20:30:52.709 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:53 compute-0 nova_compute[188777]: 2026-02-19 20:30:53.165 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:53 compute-0 nova_compute[188777]: 2026-02-19 20:30:53.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:53 compute-0 nova_compute[188777]: 2026-02-19 20:30:53.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:30:53 compute-0 nova_compute[188777]: 2026-02-19 20:30:53.280 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:30:53 compute-0 nova_compute[188777]: 2026-02-19 20:30:53.619 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:54 compute-0 podman[250613]: 2026-02-19 20:30:54.441272437 +0000 UTC m=+0.123836541 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 19 20:30:55 compute-0 nova_compute[188777]: 2026-02-19 20:30:55.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:55 compute-0 nova_compute[188777]: 2026-02-19 20:30:55.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:30:57 compute-0 podman[250632]: 2026-02-19 20:30:57.412153836 +0000 UTC m=+0.098797751 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 19 20:30:58 compute-0 nova_compute[188777]: 2026-02-19 20:30:58.170 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:58 compute-0 nova_compute[188777]: 2026-02-19 20:30:58.596 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771533043.5951169, 5aaac42d-946d-4c6f-9bde-23b8b6613b59 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:30:58 compute-0 nova_compute[188777]: 2026-02-19 20:30:58.597 188781 INFO nova.compute.manager [-] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] VM Stopped (Lifecycle Event)
Feb 19 20:30:58 compute-0 nova_compute[188777]: 2026-02-19 20:30:58.618 188781 DEBUG nova.compute.manager [None req-095e9eea-e5c6-41f2-a9e9-a5cf9d12715b - - - - - -] [instance: 5aaac42d-946d-4c6f-9bde-23b8b6613b59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:30:58 compute-0 nova_compute[188777]: 2026-02-19 20:30:58.622 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:30:59 compute-0 podman[204724]: time="2026-02-19T20:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:30:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:30:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Feb 19 20:31:01 compute-0 openstack_network_exporter[207898]: ERROR   20:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:31:01 compute-0 openstack_network_exporter[207898]: ERROR   20:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:31:03 compute-0 nova_compute[188777]: 2026-02-19 20:31:03.172 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:03 compute-0 nova_compute[188777]: 2026-02-19 20:31:03.624 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:04 compute-0 sshd-session[250657]: Received disconnect from 103.179.56.24 port 56224:11: Bye Bye [preauth]
Feb 19 20:31:04 compute-0 sshd-session[250657]: Disconnected from authenticating user root 103.179.56.24 port 56224 [preauth]
Feb 19 20:31:08 compute-0 nova_compute[188777]: 2026-02-19 20:31:08.174 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:08 compute-0 nova_compute[188777]: 2026-02-19 20:31:08.627 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:12 compute-0 podman[250660]: 2026-02-19 20:31:12.399130478 +0000 UTC m=+0.068222837 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:31:12 compute-0 podman[250659]: 2026-02-19 20:31:12.427371399 +0000 UTC m=+0.109327759 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=9.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 19 20:31:13 compute-0 nova_compute[188777]: 2026-02-19 20:31:13.179 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:13 compute-0 nova_compute[188777]: 2026-02-19 20:31:13.632 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:14 compute-0 ovn_controller[98843]: 2026-02-19T20:31:14Z|00064|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.150 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.150 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.150 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.151 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.151 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.151 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.153 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.153 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.153 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:31:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:31:15 compute-0 podman[250703]: 2026-02-19 20:31:15.393181849 +0000 UTC m=+0.083549125 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:31:18 compute-0 nova_compute[188777]: 2026-02-19 20:31:18.181 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:18 compute-0 nova_compute[188777]: 2026-02-19 20:31:18.635 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:19 compute-0 podman[250722]: 2026-02-19 20:31:19.372255803 +0000 UTC m=+0.060699203 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Feb 19 20:31:19 compute-0 podman[250721]: 2026-02-19 20:31:19.382900936 +0000 UTC m=+0.073873614 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, config_id=kepler, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Feb 19 20:31:23 compute-0 nova_compute[188777]: 2026-02-19 20:31:23.184 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:23 compute-0 podman[250760]: 2026-02-19 20:31:23.308781543 +0000 UTC m=+0.091944256 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:31:23 compute-0 nova_compute[188777]: 2026-02-19 20:31:23.637 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:25 compute-0 podman[250784]: 2026-02-19 20:31:25.363352838 +0000 UTC m=+0.052856239 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:31:28 compute-0 nova_compute[188777]: 2026-02-19 20:31:28.187 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:28 compute-0 podman[250804]: 2026-02-19 20:31:28.445317031 +0000 UTC m=+0.130174379 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 19 20:31:28 compute-0 nova_compute[188777]: 2026-02-19 20:31:28.641 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:29 compute-0 podman[204724]: time="2026-02-19T20:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:31:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:31:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Feb 19 20:31:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:31:30.447 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:31:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:31:30.447 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:31:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:31:30.448 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:31:31 compute-0 openstack_network_exporter[207898]: ERROR   20:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:31:31 compute-0 openstack_network_exporter[207898]: ERROR   20:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:31:33 compute-0 nova_compute[188777]: 2026-02-19 20:31:33.189 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:33 compute-0 nova_compute[188777]: 2026-02-19 20:31:33.644 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:38 compute-0 nova_compute[188777]: 2026-02-19 20:31:38.191 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:38 compute-0 nova_compute[188777]: 2026-02-19 20:31:38.646 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:43 compute-0 nova_compute[188777]: 2026-02-19 20:31:43.193 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:43 compute-0 nova_compute[188777]: 2026-02-19 20:31:43.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:43 compute-0 nova_compute[188777]: 2026-02-19 20:31:43.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:31:43 compute-0 podman[250832]: 2026-02-19 20:31:43.410791724 +0000 UTC m=+0.100264136 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, release=1770267347, io.buildah.version=1.33.7, version=9.7, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Feb 19 20:31:43 compute-0 podman[250833]: 2026-02-19 20:31:43.41386792 +0000 UTC m=+0.093801645 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:31:43 compute-0 nova_compute[188777]: 2026-02-19 20:31:43.649 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:46 compute-0 podman[250875]: 2026-02-19 20:31:46.392962004 +0000 UTC m=+0.073935796 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 19 20:31:48 compute-0 nova_compute[188777]: 2026-02-19 20:31:48.199 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:48 compute-0 nova_compute[188777]: 2026-02-19 20:31:48.651 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:49 compute-0 nova_compute[188777]: 2026-02-19 20:31:49.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:50 compute-0 podman[250893]: 2026-02-19 20:31:50.36056225 +0000 UTC m=+0.052562119 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, config_id=kepler, vendor=Red Hat, Inc., version=9.4, name=ubi9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Feb 19 20:31:50 compute-0 podman[250894]: 2026-02-19 20:31:50.391854126 +0000 UTC m=+0.080279153 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 19 20:31:51 compute-0 nova_compute[188777]: 2026-02-19 20:31:51.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.291 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.292 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.293 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.293 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:31:52 compute-0 sshd-session[250928]: Invalid user claude from 154.12.80.151 port 54032
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.610 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.611 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5348MB free_disk=72.24295806884766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.611 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.612 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.674 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.674 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.698 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.713 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.715 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:31:52 compute-0 nova_compute[188777]: 2026-02-19 20:31:52.716 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:31:52 compute-0 sshd-session[250928]: Received disconnect from 154.12.80.151 port 54032:11: Bye Bye [preauth]
Feb 19 20:31:52 compute-0 sshd-session[250928]: Disconnected from invalid user claude 154.12.80.151 port 54032 [preauth]
Feb 19 20:31:53 compute-0 nova_compute[188777]: 2026-02-19 20:31:53.203 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:53 compute-0 nova_compute[188777]: 2026-02-19 20:31:53.655 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:53 compute-0 nova_compute[188777]: 2026-02-19 20:31:53.712 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:54 compute-0 nova_compute[188777]: 2026-02-19 20:31:54.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:54 compute-0 nova_compute[188777]: 2026-02-19 20:31:54.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:31:54 compute-0 nova_compute[188777]: 2026-02-19 20:31:54.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:31:54 compute-0 nova_compute[188777]: 2026-02-19 20:31:54.280 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:31:54 compute-0 podman[250930]: 2026-02-19 20:31:54.42321306 +0000 UTC m=+0.108596005 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:31:55 compute-0 nova_compute[188777]: 2026-02-19 20:31:55.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:31:56 compute-0 podman[250954]: 2026-02-19 20:31:56.424861826 +0000 UTC m=+0.113747836 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:31:58 compute-0 nova_compute[188777]: 2026-02-19 20:31:58.206 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:58 compute-0 nova_compute[188777]: 2026-02-19 20:31:58.658 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:31:59 compute-0 podman[250973]: 2026-02-19 20:31:59.415523511 +0000 UTC m=+0.102855167 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 19 20:31:59 compute-0 rsyslogd[239379]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 19 20:31:59 compute-0 podman[204724]: time="2026-02-19T20:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:31:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:31:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Feb 19 20:32:01 compute-0 openstack_network_exporter[207898]: ERROR   20:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:32:01 compute-0 openstack_network_exporter[207898]: ERROR   20:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:32:03 compute-0 nova_compute[188777]: 2026-02-19 20:32:03.209 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:03 compute-0 nova_compute[188777]: 2026-02-19 20:32:03.661 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:07 compute-0 sshd-session[250999]: Invalid user titu from 103.119.94.10 port 55256
Feb 19 20:32:07 compute-0 sshd-session[250999]: Received disconnect from 103.119.94.10 port 55256:11: Bye Bye [preauth]
Feb 19 20:32:07 compute-0 sshd-session[250999]: Disconnected from invalid user titu 103.119.94.10 port 55256 [preauth]
Feb 19 20:32:08 compute-0 nova_compute[188777]: 2026-02-19 20:32:08.212 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:08 compute-0 nova_compute[188777]: 2026-02-19 20:32:08.665 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:13 compute-0 nova_compute[188777]: 2026-02-19 20:32:13.214 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:13 compute-0 nova_compute[188777]: 2026-02-19 20:32:13.668 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:14 compute-0 podman[251002]: 2026-02-19 20:32:14.368395849 +0000 UTC m=+0.051355833 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:32:14 compute-0 podman[251001]: 2026-02-19 20:32:14.392845261 +0000 UTC m=+0.085062582 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.7, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, architecture=x86_64)
Feb 19 20:32:17 compute-0 podman[251045]: 2026-02-19 20:32:17.411312342 +0000 UTC m=+0.095718325 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 19 20:32:18 compute-0 nova_compute[188777]: 2026-02-19 20:32:18.216 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:18 compute-0 nova_compute[188777]: 2026-02-19 20:32:18.671 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:21 compute-0 podman[251065]: 2026-02-19 20:32:21.379204649 +0000 UTC m=+0.064117009 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 19 20:32:21 compute-0 podman[251064]: 2026-02-19 20:32:21.393816364 +0000 UTC m=+0.081105269 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 19 20:32:23 compute-0 nova_compute[188777]: 2026-02-19 20:32:23.218 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:23 compute-0 nova_compute[188777]: 2026-02-19 20:32:23.673 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:25 compute-0 podman[251104]: 2026-02-19 20:32:25.375706447 +0000 UTC m=+0.059908948 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:32:27 compute-0 podman[251128]: 2026-02-19 20:32:27.400574247 +0000 UTC m=+0.080062547 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216)
Feb 19 20:32:28 compute-0 nova_compute[188777]: 2026-02-19 20:32:28.220 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:28 compute-0 nova_compute[188777]: 2026-02-19 20:32:28.676 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:29 compute-0 podman[204724]: time="2026-02-19T20:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:32:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:32:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Feb 19 20:32:30 compute-0 podman[251148]: 2026-02-19 20:32:30.438525804 +0000 UTC m=+0.115984146 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:32:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:32:30.448 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:32:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:32:30.449 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:32:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:32:30.449 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:32:31 compute-0 openstack_network_exporter[207898]: ERROR   20:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:32:31 compute-0 openstack_network_exporter[207898]: ERROR   20:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:32:33 compute-0 nova_compute[188777]: 2026-02-19 20:32:33.222 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:33 compute-0 nova_compute[188777]: 2026-02-19 20:32:33.679 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:38 compute-0 nova_compute[188777]: 2026-02-19 20:32:38.224 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:38 compute-0 nova_compute[188777]: 2026-02-19 20:32:38.681 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:43 compute-0 nova_compute[188777]: 2026-02-19 20:32:43.226 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:43 compute-0 nova_compute[188777]: 2026-02-19 20:32:43.683 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:44 compute-0 nova_compute[188777]: 2026-02-19 20:32:44.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:44 compute-0 nova_compute[188777]: 2026-02-19 20:32:44.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:32:44 compute-0 podman[251174]: 2026-02-19 20:32:44.723091862 +0000 UTC m=+0.055149959 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:32:44 compute-0 podman[251173]: 2026-02-19 20:32:44.730639488 +0000 UTC m=+0.064569573 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Feb 19 20:32:48 compute-0 nova_compute[188777]: 2026-02-19 20:32:48.229 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:48 compute-0 podman[251216]: 2026-02-19 20:32:48.382311947 +0000 UTC m=+0.067188366 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:32:48 compute-0 nova_compute[188777]: 2026-02-19 20:32:48.685 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:50 compute-0 nova_compute[188777]: 2026-02-19 20:32:50.268 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:52 compute-0 nova_compute[188777]: 2026-02-19 20:32:52.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:52 compute-0 podman[251238]: 2026-02-19 20:32:52.392861844 +0000 UTC m=+0.075473073 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:32:52 compute-0 podman[251237]: 2026-02-19 20:32:52.409840044 +0000 UTC m=+0.087541700 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, version=9.4)
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.231 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.284 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.285 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.285 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.285 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.685 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.686 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5343MB free_disk=72.24295425415039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.686 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.687 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.687 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.768 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.768 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.794 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.812 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.816 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:32:53 compute-0 nova_compute[188777]: 2026-02-19 20:32:53.816 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:32:54 compute-0 nova_compute[188777]: 2026-02-19 20:32:54.813 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:54 compute-0 nova_compute[188777]: 2026-02-19 20:32:54.814 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:54 compute-0 nova_compute[188777]: 2026-02-19 20:32:54.814 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:55 compute-0 nova_compute[188777]: 2026-02-19 20:32:55.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:55 compute-0 nova_compute[188777]: 2026-02-19 20:32:55.279 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:55 compute-0 nova_compute[188777]: 2026-02-19 20:32:55.279 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:32:55 compute-0 nova_compute[188777]: 2026-02-19 20:32:55.279 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:32:55 compute-0 nova_compute[188777]: 2026-02-19 20:32:55.291 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:32:55 compute-0 nova_compute[188777]: 2026-02-19 20:32:55.291 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:32:56 compute-0 podman[251276]: 2026-02-19 20:32:56.39697382 +0000 UTC m=+0.076693222 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:32:58 compute-0 nova_compute[188777]: 2026-02-19 20:32:58.237 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:58 compute-0 podman[251300]: 2026-02-19 20:32:58.394666132 +0000 UTC m=+0.087461397 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 19 20:32:58 compute-0 nova_compute[188777]: 2026-02-19 20:32:58.690 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:32:59 compute-0 podman[204724]: time="2026-02-19T20:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:32:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:32:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Feb 19 20:33:01 compute-0 openstack_network_exporter[207898]: ERROR   20:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:33:01 compute-0 openstack_network_exporter[207898]: ERROR   20:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:33:01 compute-0 podman[251320]: 2026-02-19 20:33:01.456691971 +0000 UTC m=+0.142225194 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:33:03 compute-0 nova_compute[188777]: 2026-02-19 20:33:03.238 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:03 compute-0 nova_compute[188777]: 2026-02-19 20:33:03.693 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:08 compute-0 nova_compute[188777]: 2026-02-19 20:33:08.240 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:08 compute-0 nova_compute[188777]: 2026-02-19 20:33:08.931 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:13 compute-0 nova_compute[188777]: 2026-02-19 20:33:13.243 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:13 compute-0 nova_compute[188777]: 2026-02-19 20:33:13.934 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.150 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.151 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
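Note: the two lines above mean this polling task has more pollsters than worker threads, so the single-worker executor runs them one at a time; each "Registering pollster ... to be executed via executor" line that follows is one task submission, and the shared discovery cache keeps local_instances from being re-run per pollster. An illustrative sketch of that pattern (the meter names are real ones from this cycle; the function body is hypothetical, only the executor mechanics mirror the log):

    # Illustrative sketch: more pollsters than worker threads, so
    # submissions queue behind a one-thread executor.
    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["network.outgoing.packets.error",
                 "network.incoming.bytes.rate",
                 "power.state",
                 "memory.usage"]

    def run_pollster(name):
        # With no local instances discovered, a real cycle just skips:
        # "Skip pollster <name>, no resources found this cycle"
        return f"{name}: skipped, no resources"

    with ThreadPoolExecutor(max_workers=1) as executor:
        for future in [executor.submit(run_pollster, p) for p in pollsters]:
            print(future.result())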
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.151 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.154 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f8abb0e0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'power.state': [], 'network.outgoing.bytes.delta': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.160 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.161 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:33:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:33:15.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
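Note: that completes one polling cycle. Every pollster was registered, local_instances discovery ran once and returned an empty list (the cached 'local_instances': [] above), and every meter was skipped because no instances run on this hypervisor, which is normal for an idle compute node. A small sketch for summarizing such a cycle from journal text, e.g. fed by something like journalctl -t ceilometer_agent_compute:

    # Sketch: summarize a polling cycle from journal lines like the ones
    # above (read from stdin). Counts completions and collects skipped
    # meter names.
    import re
    import sys

    skipped, finished = [], 0
    for line in sys.stdin:
        m = re.search(r"Skip pollster ([\w.]+),", line)
        if m:
            skipped.append(m.group(1))
        if "Finished processing pollster" in line:
            finished += 1
    print(f"{finished} pollsters finished, {len(skipped)} skipped")
    print(sorted(skipped))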
Feb 19 20:33:15 compute-0 podman[251346]: 2026-02-19 20:33:15.369889881 +0000 UTC m=+0.062677664 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, version=9.7, io.buildah.version=1.33.7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 19 20:33:15 compute-0 podman[251347]: 2026-02-19 20:33:15.39388661 +0000 UTC m=+0.080847431 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
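Note: the exporter containers whose health checks appear above publish Prometheus metrics on the host network: 9100 (node_exporter), 9105 (openstack_network_exporter), 9882 (podman_exporter), per their port mappings. A quick reachability sketch; plain HTTP is an assumption here, since the mounted web.config.file entries suggest TLS may be enforced on these listeners, in which case switch to https with the deployment CA:

    # Sketch: probe the exporter endpoints named in these health events.
    import urllib.request

    for port, name in [(9100, "node_exporter"),
                       (9105, "openstack_network_exporter"),
                       (9882, "podman_exporter")]:
        url = f"http://localhost:{port}/metrics"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                first = (resp.read().decode().splitlines() or [""])[0]
                print(name, "->", first)
        except OSError as exc:
            print(name, "unreachable:", exc)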
Feb 19 20:33:18 compute-0 nova_compute[188777]: 2026-02-19 20:33:18.245 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:18 compute-0 nova_compute[188777]: 2026-02-19 20:33:18.937 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:19 compute-0 podman[251389]: 2026-02-19 20:33:19.381669466 +0000 UTC m=+0.064447910 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:33:23 compute-0 nova_compute[188777]: 2026-02-19 20:33:23.247 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:23 compute-0 podman[251411]: 2026-02-19 20:33:23.375625386 +0000 UTC m=+0.063878302 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, io.openshift.tags=base rhel9)
Feb 19 20:33:23 compute-0 podman[251412]: 2026-02-19 20:33:23.408011785 +0000 UTC m=+0.090691678 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 19 20:33:23 compute-0 nova_compute[188777]: 2026-02-19 20:33:23.940 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:26 compute-0 sshd-session[251409]: Received disconnect from 160.187.147.124 port 60126:11: Bye Bye [preauth]
Feb 19 20:33:26 compute-0 sshd-session[251409]: Disconnected from authenticating user root 160.187.147.124 port 60126 [preauth]
Feb 19 20:33:27 compute-0 podman[251451]: 2026-02-19 20:33:27.35982517 +0000 UTC m=+0.051985942 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:33:28 compute-0 nova_compute[188777]: 2026-02-19 20:33:28.248 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:28 compute-0 nova_compute[188777]: 2026-02-19 20:33:28.945 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:29 compute-0 podman[251472]: 2026-02-19 20:33:29.376397111 +0000 UTC m=+0.063090347 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 19 20:33:29 compute-0 podman[204724]: time="2026-02-19T20:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:33:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:33:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Feb 19 20:33:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:33:30.449 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:33:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:33:30.449 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:33:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:33:30.450 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:33:31 compute-0 openstack_network_exporter[207898]: ERROR   20:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:33:31 compute-0 openstack_network_exporter[207898]: ERROR   20:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:33:32 compute-0 podman[251491]: 2026-02-19 20:33:32.39414407 +0000 UTC m=+0.083834335 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 19 20:33:33 compute-0 nova_compute[188777]: 2026-02-19 20:33:33.252 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:33 compute-0 nova_compute[188777]: 2026-02-19 20:33:33.948 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:38 compute-0 nova_compute[188777]: 2026-02-19 20:33:38.255 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:38 compute-0 nova_compute[188777]: 2026-02-19 20:33:38.952 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:43 compute-0 nova_compute[188777]: 2026-02-19 20:33:43.259 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:43 compute-0 nova_compute[188777]: 2026-02-19 20:33:43.955 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:45 compute-0 nova_compute[188777]: 2026-02-19 20:33:45.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:45 compute-0 nova_compute[188777]: 2026-02-19 20:33:45.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:33:46 compute-0 podman[251518]: 2026-02-19 20:33:46.380128032 +0000 UTC m=+0.063006355 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:33:46 compute-0 podman[251517]: 2026-02-19 20:33:46.392363453 +0000 UTC m=+0.075704931 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:33:48 compute-0 nova_compute[188777]: 2026-02-19 20:33:48.261 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:48 compute-0 nova_compute[188777]: 2026-02-19 20:33:48.958 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:50 compute-0 podman[251561]: 2026-02-19 20:33:50.394909 +0000 UTC m=+0.086508537 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb 19 20:33:51 compute-0 nova_compute[188777]: 2026-02-19 20:33:51.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:52 compute-0 nova_compute[188777]: 2026-02-19 20:33:52.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:52 compute-0 sshd-session[251580]: Invalid user tms from 83.235.16.111 port 36234
Feb 19 20:33:52 compute-0 sshd-session[251580]: Received disconnect from 83.235.16.111 port 36234:11: Bye Bye [preauth]
Feb 19 20:33:52 compute-0 sshd-session[251580]: Disconnected from invalid user tms 83.235.16.111 port 36234 [preauth]
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.264 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.303 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.303 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.304 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.304 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.625 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.627 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5359MB free_disk=72.24295425415039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.627 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.627 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.840 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.841 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.864 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.881 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.882 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.904 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.935 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.957 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:33:53 compute-0 nova_compute[188777]: 2026-02-19 20:33:53.960 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:54 compute-0 nova_compute[188777]: 2026-02-19 20:33:54.022 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:33:54 compute-0 nova_compute[188777]: 2026-02-19 20:33:54.023 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:33:54 compute-0 nova_compute[188777]: 2026-02-19 20:33:54.023 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:33:54 compute-0 podman[251583]: 2026-02-19 20:33:54.409995668 +0000 UTC m=+0.093348760 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:33:54 compute-0 podman[251582]: 2026-02-19 20:33:54.414291073 +0000 UTC m=+0.094626941 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, version=9.4, container_name=kepler, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 19 20:33:56 compute-0 nova_compute[188777]: 2026-02-19 20:33:56.024 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:56 compute-0 nova_compute[188777]: 2026-02-19 20:33:56.259 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:56 compute-0 nova_compute[188777]: 2026-02-19 20:33:56.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:57 compute-0 nova_compute[188777]: 2026-02-19 20:33:57.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:57 compute-0 nova_compute[188777]: 2026-02-19 20:33:57.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:33:57 compute-0 nova_compute[188777]: 2026-02-19 20:33:57.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:33:57 compute-0 nova_compute[188777]: 2026-02-19 20:33:57.288 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:33:57 compute-0 nova_compute[188777]: 2026-02-19 20:33:57.288 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:33:58 compute-0 nova_compute[188777]: 2026-02-19 20:33:58.267 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:58 compute-0 podman[251619]: 2026-02-19 20:33:58.402964876 +0000 UTC m=+0.087322783 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:33:58 compute-0 nova_compute[188777]: 2026-02-19 20:33:58.963 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:33:59 compute-0 podman[204724]: time="2026-02-19T20:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:33:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:33:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Feb 19 20:34:00 compute-0 podman[251642]: 2026-02-19 20:34:00.405744667 +0000 UTC m=+0.089058498 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:34:01 compute-0 openstack_network_exporter[207898]: ERROR   20:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:34:01 compute-0 openstack_network_exporter[207898]: ERROR   20:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:34:03 compute-0 nova_compute[188777]: 2026-02-19 20:34:03.268 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:03 compute-0 podman[251660]: 2026-02-19 20:34:03.414353451 +0000 UTC m=+0.106922054 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:34:03 compute-0 nova_compute[188777]: 2026-02-19 20:34:03.965 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:08 compute-0 nova_compute[188777]: 2026-02-19 20:34:08.270 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:08 compute-0 nova_compute[188777]: 2026-02-19 20:34:08.968 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:13 compute-0 nova_compute[188777]: 2026-02-19 20:34:13.273 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:13 compute-0 nova_compute[188777]: 2026-02-19 20:34:13.970 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:17 compute-0 podman[251685]: 2026-02-19 20:34:17.425857294 +0000 UTC m=+0.101532206 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 19 20:34:17 compute-0 podman[251686]: 2026-02-19 20:34:17.448386897 +0000 UTC m=+0.118442254 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:34:18 compute-0 nova_compute[188777]: 2026-02-19 20:34:18.284 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:18 compute-0 nova_compute[188777]: 2026-02-19 20:34:18.974 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:20 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:20.980 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:34:20 compute-0 nova_compute[188777]: 2026-02-19 20:34:20.981 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:20 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:20.981 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:34:21 compute-0 podman[251733]: 2026-02-19 20:34:21.399233383 +0000 UTC m=+0.088318355 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 19 20:34:23 compute-0 nova_compute[188777]: 2026-02-19 20:34:23.287 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:23 compute-0 nova_compute[188777]: 2026-02-19 20:34:23.978 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:25 compute-0 podman[251752]: 2026-02-19 20:34:25.395157193 +0000 UTC m=+0.086601150 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0)
Feb 19 20:34:25 compute-0 podman[251753]: 2026-02-19 20:34:25.400983965 +0000 UTC m=+0.083001998 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 19 20:34:26 compute-0 sshd-session[251790]: Received disconnect from 158.180.74.7 port 45928:11: Bye Bye [preauth]
Feb 19 20:34:26 compute-0 sshd-session[251790]: Disconnected from authenticating user root 158.180.74.7 port 45928 [preauth]
Feb 19 20:34:26 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:26.990 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:34:28 compute-0 nova_compute[188777]: 2026-02-19 20:34:28.292 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:28 compute-0 nova_compute[188777]: 2026-02-19 20:34:28.982 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:29 compute-0 podman[251794]: 2026-02-19 20:34:29.425457696 +0000 UTC m=+0.102682581 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:34:29 compute-0 podman[204724]: time="2026-02-19T20:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:34:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:34:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Feb 19 20:34:30 compute-0 sshd-session[251792]: Invalid user x from 103.250.11.249 port 40984
Feb 19 20:34:30 compute-0 sshd-session[251792]: Received disconnect from 103.250.11.249 port 40984:11: Bye Bye [preauth]
Feb 19 20:34:30 compute-0 sshd-session[251792]: Disconnected from invalid user x 103.250.11.249 port 40984 [preauth]
Feb 19 20:34:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:30.451 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:34:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:30.451 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:34:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:30.451 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:34:31 compute-0 openstack_network_exporter[207898]: ERROR   20:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:34:31 compute-0 openstack_network_exporter[207898]: ERROR   20:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:34:31 compute-0 podman[251817]: 2026-02-19 20:34:31.415378916 +0000 UTC m=+0.092243516 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 19 20:34:33 compute-0 nova_compute[188777]: 2026-02-19 20:34:33.294 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:33 compute-0 nova_compute[188777]: 2026-02-19 20:34:33.985 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:34 compute-0 podman[251838]: 2026-02-19 20:34:34.432508206 +0000 UTC m=+0.117486353 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:34:38 compute-0 nova_compute[188777]: 2026-02-19 20:34:38.296 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:38 compute-0 nova_compute[188777]: 2026-02-19 20:34:38.989 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:43 compute-0 nova_compute[188777]: 2026-02-19 20:34:43.299 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:43 compute-0 nova_compute[188777]: 2026-02-19 20:34:43.991 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:45 compute-0 nova_compute[188777]: 2026-02-19 20:34:45.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:45 compute-0 nova_compute[188777]: 2026-02-19 20:34:45.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:34:48 compute-0 nova_compute[188777]: 2026-02-19 20:34:48.303 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:48 compute-0 podman[251864]: 2026-02-19 20:34:48.410154218 +0000 UTC m=+0.099871655 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, version=9.7, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter)
Feb 19 20:34:48 compute-0 podman[251865]: 2026-02-19 20:34:48.42049224 +0000 UTC m=+0.098590915 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:34:48 compute-0 nova_compute[188777]: 2026-02-19 20:34:48.994 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:51 compute-0 ovn_controller[98843]: 2026-02-19T20:34:51Z|00065|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Feb 19 20:34:51 compute-0 nova_compute[188777]: 2026-02-19 20:34:51.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:52 compute-0 podman[251908]: 2026-02-19 20:34:52.440558073 +0000 UTC m=+0.119026062 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.304 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.307 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.307 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.307 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.307 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.610 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.612 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5347MB free_disk=72.24258804321289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.612 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.613 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.891 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.891 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:34:53 compute-0 nova_compute[188777]: 2026-02-19 20:34:53.996 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:54 compute-0 nova_compute[188777]: 2026-02-19 20:34:54.028 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:34:54 compute-0 nova_compute[188777]: 2026-02-19 20:34:54.042 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:34:54 compute-0 nova_compute[188777]: 2026-02-19 20:34:54.043 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:34:54 compute-0 nova_compute[188777]: 2026-02-19 20:34:54.044 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:34:54 compute-0 nova_compute[188777]: 2026-02-19 20:34:54.044 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:54 compute-0 nova_compute[188777]: 2026-02-19 20:34:54.045 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:34:56 compute-0 podman[251927]: 2026-02-19 20:34:56.406032314 +0000 UTC m=+0.097788209 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, config_id=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Feb 19 20:34:56 compute-0 podman[251928]: 2026-02-19 20:34:56.415577722 +0000 UTC m=+0.102258488 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.052 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.052 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.282 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:34:57 compute-0 nova_compute[188777]: 2026-02-19 20:34:57.282 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:58 compute-0 nova_compute[188777]: 2026-02-19 20:34:58.306 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:59 compute-0 nova_compute[188777]: 2026-02-19 20:34:58.999 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:59 compute-0 nova_compute[188777]: 2026-02-19 20:34:59.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:59 compute-0 nova_compute[188777]: 2026-02-19 20:34:59.290 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:34:59 compute-0 podman[204724]: time="2026-02-19T20:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:34:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:34:59 compute-0 nova_compute[188777]: 2026-02-19 20:34:59.761 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:34:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Feb 19 20:34:59 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:59.763 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:34:59 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:34:59.772 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:35:00 compute-0 podman[251970]: 2026-02-19 20:35:00.459941022 +0000 UTC m=+0.137373063 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:35:01 compute-0 openstack_network_exporter[207898]: ERROR   20:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:35:01 compute-0 openstack_network_exporter[207898]: ERROR   20:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:35:02 compute-0 podman[251994]: 2026-02-19 20:35:02.413401046 +0000 UTC m=+0.096870291 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216)
Feb 19 20:35:03 compute-0 nova_compute[188777]: 2026-02-19 20:35:03.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:03 compute-0 nova_compute[188777]: 2026-02-19 20:35:03.308 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:04 compute-0 nova_compute[188777]: 2026-02-19 20:35:04.002 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:05 compute-0 nova_compute[188777]: 2026-02-19 20:35:05.397 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:05 compute-0 podman[252014]: 2026-02-19 20:35:05.413831985 +0000 UTC m=+0.103700964 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 19 20:35:05 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:05.775 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:05 compute-0 nova_compute[188777]: 2026-02-19 20:35:05.949 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:06 compute-0 nova_compute[188777]: 2026-02-19 20:35:06.312 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:08 compute-0 nova_compute[188777]: 2026-02-19 20:35:08.310 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:08 compute-0 nova_compute[188777]: 2026-02-19 20:35:08.966 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:09 compute-0 nova_compute[188777]: 2026-02-19 20:35:09.004 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:09 compute-0 nova_compute[188777]: 2026-02-19 20:35:09.059 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:12 compute-0 nova_compute[188777]: 2026-02-19 20:35:12.813 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:13 compute-0 nova_compute[188777]: 2026-02-19 20:35:13.283 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:13 compute-0 nova_compute[188777]: 2026-02-19 20:35:13.284 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:35:13 compute-0 nova_compute[188777]: 2026-02-19 20:35:13.313 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:13 compute-0 nova_compute[188777]: 2026-02-19 20:35:13.417 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:35:14 compute-0 nova_compute[188777]: 2026-02-19 20:35:14.006 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:15 compute-0 nova_compute[188777]: 2026-02-19 20:35:15.115 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.151 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.151 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.152 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4fb79be60>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'network.incoming.bytes.rate': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.159 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.160 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:35:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:35:15.161 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
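[annotation] The DEBUG lines above trace one complete ceilometer polling cycle on a host that currently runs no instances: each pollster from the [pollsters] source is registered against a shared ThreadPoolExecutor, discovery runs once per discovery method and is memoized in the discovery cache (here local_instances resolves to an empty list), every compute pollster is therefore skipped, and each task is then marked finished. A minimal sketch of that register/discover/skip/finish flow, with hypothetical names rather than ceilometer's actual classes:

    # Sketch of the register -> discover -> skip -> finish cycle logged above.
    # Pollster objects are assumed to expose .name, .discovery_method and
    # .get_samples(); none of this is ceilometer's real implementation, and
    # the benign race on the discovery cache is ignored for brevity.
    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover):
        executor = ThreadPoolExecutor(max_workers=4)
        discovery_cache = {}   # one discovery call per method per cycle
        history = {}           # per-pollster results within this cycle

        def run_one(pollster):
            method = pollster.discovery_method        # e.g. "local_instances"
            if method not in discovery_cache:         # "Executing discovery process ..."
                discovery_cache[method] = discover(method)
            resources = discovery_cache[method]
            if not resources:                         # "Skip pollster <name>, no resources ..."
                history[pollster.name] = []
                return []
            history[pollster.name] = pollster.get_samples(resources)
            return history[pollster.name]

        futures = [executor.submit(run_one, p) for p in pollsters]  # "Registering pollster ..."
        for f in futures:
            f.result()                                # "Finished processing pollster ..."
        return history

Once the instance build later in this log completes, local_instances discovery should return that domain, and the cpu, memory.usage and disk.device.* pollsters would start emitting samples on subsequent cycles.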
Feb 19 20:35:15 compute-0 nova_compute[188777]: 2026-02-19 20:35:15.862 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:18 compute-0 nova_compute[188777]: 2026-02-19 20:35:18.314 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:19 compute-0 nova_compute[188777]: 2026-02-19 20:35:19.008 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
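[annotation] The recurring "[POLLIN] on fd 26" lines are ovsdbapp's OVSDB IDL waking from poll(2) because the ovsdb-server monitor socket became readable; each wakeup delivers a batch of JSON-RPC monitor updates. The underlying pattern looks roughly like the following, shown here with Python's select.poll directly instead of ovs.poller (hypothetical sketch, not ovsdbapp code):

    # Illustration of the wakeup behind "[POLLIN] on fd 26": block in
    # poll(2) until the OVSDB monitor socket is readable, then drain it.
    import select
    import socket

    def wait_for_update(sock: socket.socket, timeout_ms: int = 5000):
        poller = select.poll()
        poller.register(sock.fileno(), select.POLLIN)
        for fd, events in poller.poll(timeout_ms):
            if events & select.POLLIN:       # the condition logged above
                return sock.recv(65536)      # raw JSON-RPC monitor update
        return None                          # timeout: no traffic this interval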
Feb 19 20:35:19 compute-0 podman[252043]: 2026-02-19 20:35:19.393565759 +0000 UTC m=+0.080129408 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:35:19 compute-0 podman[252042]: 2026-02-19 20:35:19.398349159 +0000 UTC m=+0.085279969 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1770267347, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.7)
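[annotation] The two podman lines above are periodic healthcheck results for node_exporter and openstack_network_exporter; everything in parentheses is the container's label set plus the edpm_ansible config_data it was created from. The fields worth machine-reading are the container name, health_status and health_failing_streak. A small, hypothetical parser for lines of this shape (it relies only on the field order visible in this log):

    # Hypothetical helper for the podman "container health_status" lines above.
    import re

    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,]+), health_failing_streak=(?P<streak>\d+)"
    )

    def parse_health(line: str):
        m = HEALTH_RE.search(line)
        if m is None:
            return None
        rec = m.groupdict()
        rec["streak"] = int(rec["streak"])
        return rec   # e.g. {'name': 'node_exporter', 'status': 'healthy', ...}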
Feb 19 20:35:19 compute-0 nova_compute[188777]: 2026-02-19 20:35:19.528 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:19 compute-0 nova_compute[188777]: 2026-02-19 20:35:19.638 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:19 compute-0 nova_compute[188777]: 2026-02-19 20:35:19.815 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:21 compute-0 sshd-session[252084]: Received disconnect from 154.12.80.151 port 36594:11: Bye Bye [preauth]
Feb 19 20:35:21 compute-0 sshd-session[252084]: Disconnected from authenticating user root 154.12.80.151 port 36594 [preauth]
Feb 19 20:35:23 compute-0 nova_compute[188777]: 2026-02-19 20:35:23.316 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:23 compute-0 podman[252086]: 2026-02-19 20:35:23.389767329 +0000 UTC m=+0.075488564 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:35:24 compute-0 nova_compute[188777]: 2026-02-19 20:35:24.011 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.220 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.221 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.241 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.401 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.402 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.419 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.420 188781 INFO nova.compute.claims [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.592 188781 DEBUG nova.compute.provider_tree [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.608 188781 DEBUG nova.scheduler.client.report [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
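[annotation] The inventory dict in the previous line fixes what Placement will accept for this node: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values (plain arithmetic, no nova code involved):

    # Effective capacity implied by the inventory logged at 20:35:25.608:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2

So the 8 host cores are oversubscribed 4x to 32 schedulable VCPUs, while disk is deliberately undercommitted (allocation_ratio 0.9).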
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.638 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
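[annotation] The Acquiring/acquired/released triplets in this build sequence come from oslo.concurrency's lockutils: nova serializes the whole build under a per-instance lock (the instance UUID) and takes the service-wide compute_resources lock only for the resource claim, which is why the log reports "waited 0.001s" and "held 0.237s". The same real oslo.concurrency API in application form, with lock names taken from the lines above and placeholder bodies:

    # The acquire/release DEBUG pairs above are emitted by oslo.concurrency.
    # lockutils.lock() and @lockutils.synchronized() are the real API; the
    # function bodies here are placeholders.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim():
        pass   # resource-tracker bookkeeping, serialized service-wide

    def build_and_run_instance(instance_uuid: str):
        # per-instance lock, e.g. "7cfaa330-b089-4421-aad5-ee9cdec71c71"
        with lockutils.lock(instance_uuid):
            instance_claim()

Long "held" times on compute_resources are the usual first suspect when concurrent builds on one host appear to serialize.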
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.639 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.690 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.691 188781 DEBUG nova.network.neutron [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.710 188781 INFO nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.745 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.849 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.851 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.853 188781 INFO nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Creating image(s)
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.854 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "/var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.855 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "/var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.856 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "/var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.857 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:25 compute-0 nova_compute[188777]: 2026-02-19 20:35:25.858 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
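[annotation] The lock name a9fd80910f614000293e8e5ea927829d2f3ef59c is not random: nova's libvirt driver keys its image cache (and this fetch lock) by a SHA-1 digest of the Glance image UUID, so concurrent boots from the same image fetch it only once into the _base directory. A sketch of that derivation for illustration; the actual image UUID behind this digest is not shown in the log:

    # Nova's image cache file name / fetch lock name is derived from the
    # Glance image UUID; illustration only.
    import hashlib

    def cache_fname(image_id: str) -> str:
        return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

    # cache_fname(<glance image UUID used for this boot>)
    #   -> "a9fd80910f614000293e8e5ea927829d2f3ef59c"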
Feb 19 20:35:26 compute-0 nova_compute[188777]: 2026-02-19 20:35:26.311 188781 DEBUG nova.policy [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a7ed38b6d6a44dabe6c44e6375b7b29', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65e6bca909aa4dd3ab1eecef7ed2aa09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
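[annotation] This policy DEBUG is informational rather than fatal: with only the member and reader roles, the credentials do not satisfy network:attach_external_network, so the user may not attach directly to external provider networks, and the build continues on a tenant network. A hedged sketch of the oslo.policy evaluation behind the message; the default rule string shown (is_admin:True) is an assumption for illustration, not quoted from nova's policy files:

    # Sketch of the check behind "Policy check ... failed" above, using the
    # real oslo.policy API; the registered default is an assumption.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))

    creds = {"is_admin": False, "roles": ["member", "reader"],
             "project_id": "65e6bca909aa4dd3ab1eecef7ed2aa09"}
    allowed = enforcer.authorize("network:attach_external_network",
                                 {}, creds, do_raise=False)
    print(allowed)   # False -> nova logs the failed check and moves on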
Feb 19 20:35:27 compute-0 nova_compute[188777]: 2026-02-19 20:35:27.277 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:27 compute-0 podman[252106]: 2026-02-19 20:35:27.366647405 +0000 UTC m=+0.055223782 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:35:27 compute-0 podman[252105]: 2026-02-19 20:35:27.371385533 +0000 UTC m=+0.063337246 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=kepler, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Feb 19 20:35:27 compute-0 nova_compute[188777]: 2026-02-19 20:35:27.636 188781 WARNING nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Feb 19 20:35:27 compute-0 nova_compute[188777]: 2026-02-19 20:35:27.636 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 7cfaa330-b089-4421-aad5-ee9cdec71c71 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:35:27 compute-0 nova_compute[188777]: 2026-02-19 20:35:27.638 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
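The _sync_power_states warning above (1 instance in the database, 0 on the hypervisor) is expected at this point: 7cfaa330... is still mid-build, so no libvirt domain exists yet, and the task falls back to a per-instance sync. A rough, purely illustrative sketch of that reconciliation shape (all names hypothetical; nova's real logic lives in ComputeManager._sync_power_states):

    # Illustrative reconciliation sketch; all names here are hypothetical.
    def sync_power_states(db_instances, hypervisor_uuids, trigger_sync):
        missing = [i for i in db_instances if i["uuid"] not in hypervisor_uuids]
        for inst in missing:
            # An instance can legitimately be absent while still spawning,
            # as in this log; the per-instance sync sorts out "building"
            # from "genuinely lost".
            trigger_sync(inst["uuid"])

    sync_power_states(
        [{"uuid": "7cfaa330-b089-4421-aad5-ee9cdec71c71"}],
        set(),
        lambda u: print("Triggering sync for uuid", u))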
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.186 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.263 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.265 188781 DEBUG nova.virt.images [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] 17b9bce8-a91b-495d-ac33-cf63893413f9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.268 188781 DEBUG nova.privsep.utils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.269 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.part /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.320 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.611 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.part /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.converted" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
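The processutils entries from 20:35:28.186 to 20:35:28.611 are nova's fetch_to_raw path: probe the downloaded .part file under a prlimit sandbox (1 GiB address space, 30 s CPU), and, since it reports qcow2 while the _base cache stores raw, convert it with -t none (bypassing the host page cache). A standalone sketch using the same commands; the ProcessLimits values mirror the logged prlimit wrapper:

    # Standalone sketch of the probe-then-convert flow logged above.
    # The qemu-img invocations and limits are copied from the log lines.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    part = ("/var/lib/nova/instances/_base/"
            "a9fd80910f614000293e8e5ea927829d2f3ef59c.part")

    out, _ = processutils.execute(
        "qemu-img", "info", part, "--force-share", "--output=json",
        prlimit=limits, env_variables={"LC_ALL": "C", "LANG": "C"})

    if json.loads(out)["format"] == "qcow2":
        # -t none skips the host page cache; the _base cache holds raw.
        processutils.execute(
            "qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
            part, part.replace(".part", ".converted"),
            env_variables={"LC_ALL": "C", "LANG": "C"})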
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.615 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.662 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c.converted --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.664 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.698 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.778 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.780 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.781 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.802 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.867 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.868 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.893 188781 DEBUG nova.network.neutron [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Successfully created port: a8c2bacc-6880-4dc4-a4de-24561426643c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.908 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.910 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
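The qemu-img create at 20:35:28.868 is the copy-on-write step: the instance disk is a qcow2 overlay whose backing file is the shared raw base image, sized to the flavor's 1 GiB root disk. backing_fmt=raw is required because the base was converted to raw above and qcow2 must record its backing file's format. A minimal reproduction with the paths and size taken straight from the log:

    # Minimal reproduction of the overlay creation logged above.
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "a9fd80910f614000293e8e5ea927829d2f3ef59c")
    disk = ("/var/lib/nova/instances/"
            "7cfaa330-b089-4421-aad5-ee9cdec71c71/disk")

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],  # 1 GiB root disk, as in the log
        check=True, env={"LC_ALL": "C", "LANG": "C"})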
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.911 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.975 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.976 188781 DEBUG nova.virt.disk.api [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Checking if we can resize image /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:35:28 compute-0 nova_compute[188777]: 2026-02-19 20:35:28.977 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.016 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.041 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.042 188781 DEBUG nova.virt.disk.api [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Cannot resize image /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
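"Cannot resize image ... to a smaller size" at 20:35:29.042 is informational rather than an error: can_resize_image compares the flavor's root size against the overlay's current virtual size and only ever grows a disk; here both are 1 GiB, so nothing is done. A sketch of that comparison, reading virtual-size from the same qemu-img info JSON the surrounding commands produce:

    # Sketch of the can_resize_image rule: only grow, never shrink.
    import json
    import subprocess

    def can_resize(path, new_size):
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        virtual_size = json.loads(out)["virtual-size"]
        # A smaller or equal target is refused, matching the DEBUG above.
        return new_size > virtual_size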
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.042 188781 DEBUG nova.objects.instance [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lazy-loading 'migration_context' on Instance uuid 7cfaa330-b089-4421-aad5-ee9cdec71c71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.062 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.063 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Ensure instance console log exists: /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.064 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.064 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:29 compute-0 nova_compute[188777]: 2026-02-19 20:35:29.065 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:29 compute-0 podman[204724]: time="2026-02-19T20:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:35:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 19 20:35:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
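These two podman[204724] lines are the libpod REST API service answering scrapes from prometheus-podman-exporter over the unix socket (the exporter's CONTAINER_HOST elsewhere in this log points at unix:///run/podman/podman.sock). The same endpoint can be queried by hand; HTTP/1.0 keeps the response unchunked so the body can be split off naively. Socket path and API version are taken from this log, not guaranteed elsewhere:

    # Hand-rolled query of the libpod endpoint from the access-log line above.
    import json
    import socket

    request = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
               b"Host: d\r\n\r\n")
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/podman/podman.sock")
        s.sendall(request)
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    headers, _, body = raw.partition(b"\r\n\r\n")
    print(len(json.loads(body)), "containers")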
Feb 19 20:35:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:30.452 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:30.452 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:30.452 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.300 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.302 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.326 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:35:31 compute-0 podman[252171]: 2026-02-19 20:35:31.389248238 +0000 UTC m=+0.080789710 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:35:31 compute-0 openstack_network_exporter[207898]: ERROR   20:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:35:31 compute-0 openstack_network_exporter[207898]: ERROR   20:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
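The openstack_network_exporter errors above are likely benign on this host: the dpif-netdev/pmd-* appctl commands only apply to a userspace (netdev) datapath, and the port bindings later in this log show "datapath_type": "system", i.e. the kernel datapath, so there is no PMD state to report. A probe sketch that fails the same way under a kernel datapath:

    # Probe sketch: the dpif-netdev command the exporter calls, run directly.
    import subprocess

    probe = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True)
    if probe.returncode != 0:
        # Mirrors the logged "please specify an existing datapath" error.
        print("no userspace datapath:", probe.stderr.strip())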
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.471 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.471 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.479 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.480 188781 INFO nova.compute.claims [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.529 188781 DEBUG nova.network.neutron [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Successfully updated port: a8c2bacc-6880-4dc4-a4de-24561426643c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.547 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.548 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquired lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.548 188781 DEBUG nova.network.neutron [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.620 188781 DEBUG nova.compute.provider_tree [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.639 188781 DEBUG nova.scheduler.client.report [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
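The inventory dict in this entry is enough to reproduce the effective capacity placement schedules against, using capacity = (total - reserved) * allocation_ratio per resource class. Worked out from the logged values:

    # Worked example: effective capacity from the inventory logged above,
    # using placement's formula (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2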
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.665 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.666 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.717 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.718 188781 DEBUG nova.network.neutron [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.737 188781 INFO nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.753 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.841 188781 DEBUG nova.network.neutron [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.854 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.855 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.856 188781 INFO nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Creating image(s)
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.857 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.857 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.858 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.871 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.924 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.926 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.927 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.944 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.995 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:31 compute-0 nova_compute[188777]: 2026-02-19 20:35:31.997 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.036 188781 DEBUG nova.policy [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43931603bc9f40eab8e548129d4c50cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c8b3e035bb347acad9c4027457ee296', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.040 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.041 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.042 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.094 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.096 188781 DEBUG nova.virt.disk.api [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Checking if we can resize image /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.096 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.151 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.152 188781 DEBUG nova.virt.disk.api [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Cannot resize image /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.153 188781 DEBUG nova.objects.instance [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'migration_context' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.179 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.180 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Ensure instance console log exists: /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.181 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.181 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.182 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.931 188781 DEBUG nova.compute.manager [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-changed-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.932 188781 DEBUG nova.compute.manager [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Refreshing instance network info cache due to event network-changed-a8c2bacc-6880-4dc4-a4de-24561426643c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:35:32 compute-0 nova_compute[188777]: 2026-02-19 20:35:32.933 188781 DEBUG oslo_concurrency.lockutils [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.030 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "7a56de80-4437-4013-96c2-be1937f088e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.031 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.049 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.128 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.129 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.138 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.139 188781 INFO nova.compute.claims [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.320 188781 DEBUG nova.compute.provider_tree [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.323 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.335 188781 DEBUG nova.scheduler.client.report [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.363 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.364 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.368 188781 DEBUG nova.network.neutron [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Updating instance_info_cache with network_info: [{"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:33 compute-0 podman[252211]: 2026-02-19 20:35:33.403529436 +0000 UTC m=+0.087580961 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.406 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Releasing lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.406 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Instance network_info: |[{"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.406 188781 DEBUG oslo_concurrency.lockutils [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.407 188781 DEBUG nova.network.neutron [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Refreshing network info cache for port a8c2bacc-6880-4dc4-a4de-24561426643c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.409 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Start _get_guest_xml network_info=[{"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.416 188781 WARNING nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.429 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.429 188781 DEBUG nova.network.neutron [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.433 188781 DEBUG nova.virt.libvirt.host [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.434 188781 DEBUG nova.virt.libvirt.host [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.443 188781 DEBUG nova.virt.libvirt.host [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.443 188781 DEBUG nova.virt.libvirt.host [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
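The two probes above are how the libvirt driver decides which cgroup hierarchy can cap guest CPU time: the v1 controller is absent, the v2 one is present. On a cgroup-v2 (unified) host the enabled controllers are listed in /sys/fs/cgroup/cgroup.controllers, so the check reduces to a file read. A minimal standalone sketch (the helper name is ours, not nova's):

    from pathlib import Path

    def host_has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """Return True if the unified hierarchy exposes the 'cpu' controller."""
        controllers_file = Path(root) / "cgroup.controllers"
        try:
            controllers = controllers_file.read_text().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted => not cgroup v2
        return "cpu" in controllers

    print(host_has_cgroupsv2_cpu_controller())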
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.444 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.444 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.445 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.445 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.445 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.446 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.446 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.446 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.446 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.447 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.447 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.447 188781 DEBUG nova.virt.hardware [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.451 188781 DEBUG nova.virt.libvirt.vif [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1237028603',display_name='tempest-ServersTestJSON-server-1237028603',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1237028603',id=6,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6tJI6UCEbI4YMm7Ut3tVcGMmZaGl8BsiTYGl/tCElZtNEZ2yrgIB1FcS/+HnoInFbVW4qmy7nFzGE1z6ZPGV9XAAdWF32PypqPQrQCe9pp/EbRi87hRAVImYrLAxTPRg==',key_name='tempest-keypair-253539883',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65e6bca909aa4dd3ab1eecef7ed2aa09',ramdisk_id='',reservation_id='r-3jipzoix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1140008513',owner_user_name='tempest-ServersTestJSON-1140008513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:35:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1a7ed38b6d6a44dabe6c44e6375b7b29',uuid=7cfaa330-b089-4421-aad5-ee9cdec71c71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.452 188781 DEBUG nova.network.os_vif_util [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Converting VIF {"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.452 188781 DEBUG nova.network.os_vif_util [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.454 188781 DEBUG nova.objects.instance [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cfaa330-b089-4421-aad5-ee9cdec71c71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.461 188781 INFO nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.476 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <uuid>7cfaa330-b089-4421-aad5-ee9cdec71c71</uuid>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <name>instance-00000006</name>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:name>tempest-ServersTestJSON-server-1237028603</nova:name>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:35:33</nova:creationTime>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:user uuid="1a7ed38b6d6a44dabe6c44e6375b7b29">tempest-ServersTestJSON-1140008513-project-member</nova:user>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:project uuid="65e6bca909aa4dd3ab1eecef7ed2aa09">tempest-ServersTestJSON-1140008513</nova:project>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         <nova:port uuid="a8c2bacc-6880-4dc4-a4de-24561426643c">
Feb 19 20:35:33 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <system>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <entry name="serial">7cfaa330-b089-4421-aad5-ee9cdec71c71</entry>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <entry name="uuid">7cfaa330-b089-4421-aad5-ee9cdec71c71</entry>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </system>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <os>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </os>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <features>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </features>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.config"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:2e:3a:32"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <target dev="tapa8c2bacc-68"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/console.log" append="off"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <video>
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </video>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:35:33 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:35:33 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:35:33 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:35:33 compute-0 nova_compute[188777]: </domain>
Feb 19 20:35:33 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
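The domain XML dumped above is plain libvirt XML and can be inspected offline. A small sketch pulling back the fields the flavor metadata also records (uuid, memory in KiB, vcpus, machine type), assuming the dump was saved to a local file named domain.xml (a name we chose for illustration):

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()            # hypothetical local copy
    print("uuid:   ", root.findtext("uuid"))           # 7cfaa330-b089-...
    print("memory: ", root.findtext("memory"), "KiB")  # 131072 KiB == 128 MiB
    print("vcpus:  ", root.findtext("vcpu"))           # 1
    print("machine:", root.find("./os/type").get("machine"))  # q35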
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.476 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Preparing to wait for external event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.477 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.477 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.477 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
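The three lockutils lines above are oslo.concurrency's named-lock pattern: acquire, run the wrapped callable, release, with wait/held times logged. A minimal sketch of the same decorator API (an in-process lock by default; the function body here is elided):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("7cfaa330-b089-4421-aad5-ee9cdec71c71-events")
    def create_or_get_event():
        ...  # the decorator serializes concurrent callers on the named lock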
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.479 188781 DEBUG nova.virt.libvirt.vif [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1237028603',display_name='tempest-ServersTestJSON-server-1237028603',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1237028603',id=6,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6tJI6UCEbI4YMm7Ut3tVcGMmZaGl8BsiTYGl/tCElZtNEZ2yrgIB1FcS/+HnoInFbVW4qmy7nFzGE1z6ZPGV9XAAdWF32PypqPQrQCe9pp/EbRi87hRAVImYrLAxTPRg==',key_name='tempest-keypair-253539883',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65e6bca909aa4dd3ab1eecef7ed2aa09',ramdisk_id='',reservation_id='r-3jipzoix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1140008513',owner_user_name='tempest-ServersTestJSON-1140008513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:35:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1a7ed38b6d6a44dabe6c44e6375b7b29',uuid=7cfaa330-b089-4421-aad5-ee9cdec71c71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.479 188781 DEBUG nova.network.os_vif_util [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Converting VIF {"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.480 188781 DEBUG nova.network.os_vif_util [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.480 188781 DEBUG os_vif [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.483 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.483 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.484 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.484 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.491 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.492 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c2bacc-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.492 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8c2bacc-68, col_values=(('external_ids', {'iface-id': 'a8c2bacc-6880-4dc4-a4de-24561426643c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:3a:32', 'vm-uuid': '7cfaa330-b089-4421-aad5-ee9cdec71c71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.494 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:33 compute-0 NetworkManager[57033]: <info>  [1771533333.4959] manager: (tapa8c2bacc-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.498 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.501 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.503 188781 INFO os_vif [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68')
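The plug confirmed here is the effect of the AddPortCommand/DbSetCommand transaction at 20:35:33.492: attach tapa8c2bacc-68 to br-int and stamp the Interface row's external_ids with the Neutron port id and MAC so OVN can claim it (which it does at 20:35:35 below). Outside nova the same wiring is commonly done with ovs-vsctl; a rough subprocess sketch of that equivalent (nova itself goes through ovsdbapp, not the CLI):

    import subprocess

    bridge, port = "br-int", "tapa8c2bacc-68"
    external_ids = {
        "iface-id": "a8c2bacc-6880-4dc4-a4de-24561426643c",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:2e:3a:32",
        "vm-uuid": "7cfaa330-b089-4421-aad5-ee9cdec71c71",
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += ['external_ids:%s="%s"' % (k, v) for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)  # values quoted for ovs-vsctl's parser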
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.596 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.596 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.596 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] No VIF found with MAC fa:16:3e:2e:3a:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.598 188781 INFO nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Using config drive
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.601 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.602 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.602 188781 INFO nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Creating image(s)
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.603 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.603 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.604 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.629 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.677 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.678 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.679 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.689 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.702 188781 DEBUG nova.policy [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f852f439f2394296a1bd7c9dfc0f03cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e5dd04de830547fc9be85d60a48c5a31', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.739 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.740 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.790 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
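The qemu-img create just logged is nova's copy-on-write layering: the per-instance disk is a qcow2 overlay whose backing file is the shared image in _base, sized to the flavor's 1 GiB root disk. A sketch of the same invocation plus a check of the resulting backing chain, with the paths copied from the log:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c"
    disk = "/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk"

    # Create the overlay: reads fall through to 'base' until blocks are written.
    subprocess.run(["qemu-img", "create", "-f", "qcow2",
                    "-o", f"backing_file={base},backing_fmt=raw",
                    disk, "1073741824"], check=True)  # 1 GiB virtual size

    # Verify the chain the same way nova does, via qemu-img info's JSON output.
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", disk]))
    assert info["backing-filename"] == base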
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.791 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.791 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.837 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.838 188781 DEBUG nova.virt.disk.api [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Checking if we can resize image /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.838 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.884 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.885 188781 DEBUG nova.virt.disk.api [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Cannot resize image /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.885 188781 DEBUG nova.objects.instance [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lazy-loading 'migration_context' on Instance uuid 7a56de80-4437-4013-96c2-be1937f088e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.904 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.904 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Ensure instance console log exists: /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.905 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.905 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:33 compute-0 nova_compute[188777]: 2026-02-19 20:35:33.905 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
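
These three lockutils lines are one pass through oslo.concurrency's named-lock decorator: acquire, report wait time, run the wrapped function, report hold time. A minimal sketch of the same pattern, with a plain threading.Lock standing in for oslo's fair and interprocess variants (names are illustrative):

import threading
import time
from collections import defaultdict

_locks = defaultdict(threading.Lock)  # one lock per name, created on demand

def synchronized(name):
    def decorator(fn):
        def inner(*args, **kwargs):
            t0 = time.monotonic()
            with _locks[name]:
                print(f'Lock "{name}" acquired by "{fn.__name__}" :: '
                      f'waited {time.monotonic() - t0:.3f}s')
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    print(f'Lock "{name}" released by "{fn.__name__}" :: '
                          f'held {time.monotonic() - t1:.3f}s')
        return inner
    return decorator

@synchronized("vgpu_resources")
def allocate_mdevs():
    pass  # the 0.000s wait/hold above shows the vGPU path is uncontended here

allocate_mdevs()
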
Feb 19 20:35:34 compute-0 nova_compute[188777]: 2026-02-19 20:35:34.360 188781 DEBUG nova.network.neutron [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Successfully created port: b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:35:34 compute-0 nova_compute[188777]: 2026-02-19 20:35:34.938 188781 INFO nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Creating config drive at /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.config
Feb 19 20:35:34 compute-0 nova_compute[188777]: 2026-02-19 20:35:34.945 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_j3t4sqn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.065 188781 DEBUG oslo_concurrency.processutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_j3t4sqn" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
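
The config drive is just an ISO 9660 image whose volume label, config-2, is what cloud-init probes for. A sketch of the same mkisofs invocation driven from Python, assuming mkisofs is on PATH and /tmp/cd_contents is a hypothetical directory already holding the openstack/ metadata tree:

import subprocess

subprocess.check_call([
    "mkisofs",
    "-o", "disk.config",            # ISO later attached to the guest as a CD-ROM
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",  # relaxed file naming
    "-publisher", "OpenStack Compute",
    "-quiet",
    "-J", "-r",                     # Joliet + Rock Ridge extensions
    "-V", "config-2",               # the volume label cloud-init looks for
    "/tmp/cd_contents",
])
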
Feb 19 20:35:35 compute-0 kernel: tapa8c2bacc-68: entered promiscuous mode
Feb 19 20:35:35 compute-0 NetworkManager[57033]: <info>  [1771533335.1355] manager: (tapa8c2bacc-68): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb 19 20:35:35 compute-0 ovn_controller[98843]: 2026-02-19T20:35:35Z|00066|binding|INFO|Claiming lport a8c2bacc-6880-4dc4-a4de-24561426643c for this chassis.
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.138 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 ovn_controller[98843]: 2026-02-19T20:35:35Z|00067|binding|INFO|a8c2bacc-6880-4dc4-a4de-24561426643c: Claiming fa:16:3e:2e:3a:32 10.100.0.10
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.145 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 ovn_controller[98843]: 2026-02-19T20:35:35Z|00068|binding|INFO|Setting lport a8c2bacc-6880-4dc4-a4de-24561426643c ovn-installed in OVS
Feb 19 20:35:35 compute-0 ovn_controller[98843]: 2026-02-19T20:35:35Z|00069|binding|INFO|Setting lport a8c2bacc-6880-4dc4-a4de-24561426643c up in Southbound
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.152 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3a:32 10.100.0.10'], port_security=['fa:16:3e:2e:3a:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7cfaa330-b089-4421-aad5-ee9cdec71c71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01786654-eac7-4d52-bce1-6a98f80c6941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65e6bca909aa4dd3ab1eecef7ed2aa09', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63c5edfd-0baf-46d0-bc2d-21bd65015788', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e2aeadf-084b-41a0-880d-b0e27a6eeaf2, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=a8c2bacc-6880-4dc4-a4de-24561426643c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.155 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.153 108175 INFO neutron.agent.ovn.metadata.agent [-] Port a8c2bacc-6880-4dc4-a4de-24561426643c in datapath 01786654-eac7-4d52-bce1-6a98f80c6941 bound to our chassis
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.155 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01786654-eac7-4d52-bce1-6a98f80c6941
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.162 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[25d8f763-addc-42ca-8f78-74fd6900c021]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.163 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01786654-e1 in ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
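
Provisioning the datapath means building a veth pair whose -e0 end stays in the root namespace (plugged into br-int a few lines below) while the -e1 end moves into the ovnmeta- namespace where the metadata proxy will listen. A rough equivalent with pyroute2, run as root; the real agent does this through privsep'd helpers in neutron.privileged.agent.linux.ip_lib, so this is only a sketch:

from pyroute2 import IPRoute, netns

NS = "ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941"
netns.create(NS)  # no idempotence handling in this sketch

ipr = IPRoute()
ipr.link("add", ifname="tap01786654-e0", kind="veth", peer="tap01786654-e1")
peer_idx = ipr.link_lookup(ifname="tap01786654-e1")[0]
ipr.link("set", index=peer_idx, net_ns_fd=NS)  # move the peer into the namespace
ipr.close()
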
Feb 19 20:35:35 compute-0 systemd-machined[158158]: New machine qemu-6-instance-00000006.
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.164 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01786654-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.164 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[21fa873a-e9ab-4c15-a2d7-18697318fa4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.165 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[847c8df5-91ed-4d83-a400-76d2dae8c4b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.173 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[d54a57b8-fcfc-4d63-9236-8f5be7bbf21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.183 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[f081a506-04cd-4b79-8e07-4b59c5c4bf83]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 systemd-udevd[252269]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:35:35 compute-0 NetworkManager[57033]: <info>  [1771533335.1966] device (tapa8c2bacc-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:35:35 compute-0 NetworkManager[57033]: <info>  [1771533335.1973] device (tapa8c2bacc-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.202 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[1de2385f-0d57-45b8-af0b-dd91747139b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 NetworkManager[57033]: <info>  [1771533335.2076] manager: (tap01786654-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.206 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4f654619-e2c6-4960-8cca-e369c2dbbd29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.227 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[c59fb4fd-454e-4c32-accb-9d3c7e724783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.229 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[90a88076-a5ca-456b-8251-e4796f948450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 NetworkManager[57033]: <info>  [1771533335.2457] device (tap01786654-e0): carrier: link connected
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.249 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[241434e3-08db-48d4-97b3-9471385acf08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.265 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e1de966e-5281-401e-8ef5-f650188967de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01786654-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5b:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484671, 'reachable_time': 15746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252299, 'error': None, 'target': 'ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.277 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bebd2e-e25f-4df5-bbc5-c50b1ed310bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:5b51'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484671, 'tstamp': 484671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252301, 'error': None, 'target': 'ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.289 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[27f6fac4-4afc-47d6-bbea-99b0f8ea3a34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01786654-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5b:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484671, 'reachable_time': 15746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252302, 'error': None, 'target': 'ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
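
The two large privsep replies above are raw RTM_NEWLINK netlink messages for tap01786654-e1, fetched inside the metadata namespace (note 'target': 'ovnmeta-...' in each header) to confirm the link is up with the expected MAC. Roughly the same query with pyroute2's NetNS handle; this is illustrative, not the agent's actual code path:

from pyroute2 import NetNS

with NetNS("ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941") as ns:
    idx = ns.link_lookup(ifname="tap01786654-e1")[0]
    (link,) = ns.get_links(idx)
    # The same attributes seen in the dumps: IFLA_ADDRESS, IFLA_OPERSTATE, ...
    print(link.get_attr("IFLA_ADDRESS"), link.get_attr("IFLA_OPERSTATE"))
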
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.308 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8c4761-a1cb-4e68-aa43-c9d0dbded2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.350 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[6261fe5e-d234-4b6b-a5b3-68dd9a3d7e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.352 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01786654-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.352 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.353 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01786654-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.355 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 NetworkManager[57033]: <info>  [1771533335.3558] manager: (tap01786654-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 19 20:35:35 compute-0 kernel: tap01786654-e0: entered promiscuous mode
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.359 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01786654-e0, col_values=(('external_ids', {'iface-id': 'f8bda42e-82fd-444e-9eec-587fd2a85c15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:35 compute-0 ovn_controller[98843]: 2026-02-19T20:35:35Z|00070|binding|INFO|Releasing lport f8bda42e-82fd-444e-9eec-587fd2a85c15 from this chassis (sb_readonly=0)
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.361 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.364 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01786654-eac7-4d52-bce1-6a98f80c6941.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01786654-eac7-4d52-bce1-6a98f80c6941.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.365 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[392a7af7-336e-44ea-aa35-7b8ef2e8849a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.363 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.366 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-01786654-eac7-4d52-bce1-6a98f80c6941
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/01786654-eac7-4d52-bce1-6a98f80c6941.pid.haproxy
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID 01786654-eac7-4d52-bce1-6a98f80c6941
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.369 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:35 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:35.370 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941', 'env', 'PROCESS_TAG=haproxy-01786654-eac7-4d52-bce1-6a98f80c6941', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01786654-eac7-4d52-bce1-6a98f80c6941.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
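
With the config rendered, the agent launches haproxy inside the namespace via rootwrap, so 169.254.169.254:80 exists only for this network's instances. Stripped of rootwrap and the PROCESS_TAG environment, the launch reduces to something like the following sketch (paths taken from the lines above; the -c pre-flight validation is an addition for illustration, not something the agent is shown running):

import subprocess

NETNS = "ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941"
CFG = "/var/lib/neutron/ovn-metadata-proxy/01786654-eac7-4d52-bce1-6a98f80c6941.conf"

# Validate the rendered config, then start haproxy daemonized inside
# the metadata namespace (the config's "daemon" directive forks it).
subprocess.check_call(["ip", "netns", "exec", NETNS, "haproxy", "-c", "-f", CFG])
subprocess.check_call(["ip", "netns", "exec", NETNS, "haproxy", "-f", CFG])
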
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.798 188781 DEBUG nova.network.neutron [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Successfully created port: fa073f2b-de2e-4fae-9203-432a59201885 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:35:35 compute-0 podman[252333]: 2026-02-19 20:35:35.876807204 +0000 UTC m=+0.085957741 container create e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 19 20:35:35 compute-0 podman[252333]: 2026-02-19 20:35:35.815358928 +0000 UTC m=+0.024509495 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:35:35 compute-0 systemd[1]: Started libpod-conmon-e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28.scope.
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.929 188781 DEBUG nova.network.neutron [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Updated VIF entry in instance network info cache for port a8c2bacc-6880-4dc4-a4de-24561426643c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:35:35 compute-0 nova_compute[188777]: 2026-02-19 20:35:35.930 188781 DEBUG nova.network.neutron [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Updating instance_info_cache with network_info: [{"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
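
The instance_info_cache payload above is plain JSON, one dict per VIF. For reference, pulling the device name, MAC, and fixed IPs out of an entry shaped like that one (trimmed here to the relevant keys):

nw_info = [{
    "id": "a8c2bacc-6880-4dc4-a4de-24561426643c",
    "address": "fa:16:3e:2e:3a:32",
    "devname": "tapa8c2bacc-68",
    "network": {"subnets": [{"cidr": "10.100.0.0/28",
                             "ips": [{"address": "10.100.0.10", "type": "fixed"}]}]},
}]

for vif in nw_info:
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["devname"], vif["address"], fixed_ips)
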
Feb 19 20:35:36 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc72e41e0f1788eeaa2f231550ca7ca21d609b01f4449f7fbc2377e3a688378d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.535 188781 DEBUG oslo_concurrency.lockutils [req-60ea146d-81bc-4e10-886b-491f5cd3b4af req-c4a51392-e520-479e-89c0-c1f7f1c79ee1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:36 compute-0 podman[252333]: 2026-02-19 20:35:36.55454738 +0000 UTC m=+0.763697937 container init e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:35:36 compute-0 podman[252333]: 2026-02-19 20:35:36.560121424 +0000 UTC m=+0.769271961 container start e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 19 20:35:36 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [NOTICE]   (252374) : New worker (252381) forked
Feb 19 20:35:36 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [NOTICE]   (252374) : Loading success.
Feb 19 20:35:36 compute-0 podman[252344]: 2026-02-19 20:35:36.605134606 +0000 UTC m=+0.693154647 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.655 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533336.654383, 7cfaa330-b089-4421-aad5-ee9cdec71c71 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.655 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] VM Started (Lifecycle Event)
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.682 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.688 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533336.6545157, 7cfaa330-b089-4421-aad5-ee9cdec71c71 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.689 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] VM Paused (Lifecycle Event)
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.720 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.725 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:35:36 compute-0 nova_compute[188777]: 2026-02-19 20:35:36.746 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] During sync_power_state the instance has a pending task (spawning). Skip.
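
The "Skip" above is the interesting part of power-state sync: libvirt reported Paused (VM power_state 3) while the database still says NOSTATE (0), but because the instance has a pending task (spawning) Nova leaves it alone rather than racing the spawn. A condensed sketch of that decision; the real handler in nova.compute.manager covers many more state combinations:

NOSTATE, RUNNING, PAUSED = 0, 1, 3   # libvirt-style power-state codes

def sync_power_state(task_state, db_power_state, vm_power_state):
    if task_state is not None:
        # An in-flight operation owns the instance; reconciling now
        # would race it, so just log and skip.
        return f"pending task ({task_state}) -> skip"
    if db_power_state != vm_power_state:
        return f"update DB power_state {db_power_state} -> {vm_power_state}"
    return "in sync"

print(sync_power_state("spawning", NOSTATE, PAUSED))   # pending task -> skip
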
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.101 188781 DEBUG nova.network.neutron [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Successfully updated port: fa073f2b-de2e-4fae-9203-432a59201885 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.121 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.122 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquired lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.122 188781 DEBUG nova.network.neutron [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.149 188781 DEBUG nova.compute.manager [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Received event network-changed-fa073f2b-de2e-4fae-9203-432a59201885 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.150 188781 DEBUG nova.compute.manager [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Refreshing instance network info cache due to event network-changed-fa073f2b-de2e-4fae-9203-432a59201885. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.150 188781 DEBUG oslo_concurrency.lockutils [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:37 compute-0 nova_compute[188777]: 2026-02-19 20:35:37.734 188781 DEBUG nova.network.neutron [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.494 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.718 188781 DEBUG nova.network.neutron [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Successfully updated port: b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.725 188781 DEBUG nova.compute.manager [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-changed-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.726 188781 DEBUG nova.compute.manager [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Refreshing instance network info cache due to event network-changed-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.726 188781 DEBUG oslo_concurrency.lockutils [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.726 188781 DEBUG oslo_concurrency.lockutils [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.726 188781 DEBUG nova.network.neutron [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Refreshing network info cache for port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:35:38 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.727 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:38 compute-0 nova_compute[188777]: 2026-02-19 20:35:38.749 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:38 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:35:39 compute-0 nova_compute[188777]: 2026-02-19 20:35:39.012 188781 DEBUG nova.network.neutron [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.829 188781 DEBUG nova.network.neutron [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Updating instance_info_cache with network_info: [{"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.856 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Releasing lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.856 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Instance network_info: |[{"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.857 188781 DEBUG oslo_concurrency.lockutils [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.857 188781 DEBUG nova.network.neutron [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Refreshing network info cache for port fa073f2b-de2e-4fae-9203-432a59201885 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.860 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Start _get_guest_xml network_info=[{"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
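
The disk_info mapping in the _get_guest_xml line above is what ultimately becomes <disk> elements in the guest XML: the root qcow2 on a virtio bus as vda, the config drive on a SATA bus as a cdrom. A hand-rolled rendering of that mapping for illustration only; Nova builds the real XML through its libvirt config objects, and the source path here is a placeholder:

mapping = {
    "disk":        {"bus": "virtio", "dev": "vda", "type": "disk"},
    "disk.config": {"bus": "sata",   "dev": "sda", "type": "cdrom"},
}

for name, m in mapping.items():
    print(f'<disk type="file" device="{m["type"]}">\n'
          f'  <source file="/var/lib/nova/instances/<uuid>/{name}"/>\n'
          f'  <target dev="{m["dev"]}" bus="{m["bus"]}"/>\n'
          f'</disk>')
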
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.866 188781 WARNING nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.872 188781 DEBUG nova.virt.libvirt.host [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.873 188781 DEBUG nova.virt.libvirt.host [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.880 188781 DEBUG nova.virt.libvirt.host [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.881 188781 DEBUG nova.virt.libvirt.host [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.881 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.881 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.882 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.882 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.883 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.883 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.883 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.884 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.884 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.884 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.885 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.885 188781 DEBUG nova.virt.hardware [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.889 188781 DEBUG nova.virt.libvirt.vif [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:35:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2015353785',display_name='tempest-ServersTestManualDisk-server-2015353785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2015353785',id=8,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFXFDQijSANFDoUVi84SuiAfHVF3spIPoYnKYJFGqNIq4PhfTmqJYMCiMcaurCo/ihMWQ4HRhinwq1C8zYZck26Imnue1gKTPXfH2xEFNH/1E/za2XGNR7k+Iye5A44NOw==',key_name='tempest-keypair-99209297',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5dd04de830547fc9be85d60a48c5a31',ramdisk_id='',reservation_id='r-02xox5bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-377148035',owner_user_name='tempest-ServersTestManualDisk-377148035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:35:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f852f439f2394296a1bd7c9dfc0f03cc',uuid=7a56de80-4437-4013-96c2-be1937f088e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.889 188781 DEBUG nova.network.os_vif_util [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Converting VIF {"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.890 188781 DEBUG nova.network.os_vif_util [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.891 188781 DEBUG nova.objects.instance [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a56de80-4437-4013-96c2-be1937f088e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.914 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <uuid>7a56de80-4437-4013-96c2-be1937f088e1</uuid>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <name>instance-00000008</name>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:name>tempest-ServersTestManualDisk-server-2015353785</nova:name>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:35:40</nova:creationTime>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:user uuid="f852f439f2394296a1bd7c9dfc0f03cc">tempest-ServersTestManualDisk-377148035-project-member</nova:user>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:project uuid="e5dd04de830547fc9be85d60a48c5a31">tempest-ServersTestManualDisk-377148035</nova:project>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         <nova:port uuid="fa073f2b-de2e-4fae-9203-432a59201885">
Feb 19 20:35:40 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <system>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <entry name="serial">7a56de80-4437-4013-96c2-be1937f088e1</entry>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <entry name="uuid">7a56de80-4437-4013-96c2-be1937f088e1</entry>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </system>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <os>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </os>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <features>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </features>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.config"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:20:e7:76"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <target dev="tapfa073f2b-de"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/console.log" append="off"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <video>
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </video>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:35:40 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:35:40 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:35:40 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:35:40 compute-0 nova_compute[188777]: </domain>
Feb 19 20:35:40 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.915 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Preparing to wait for external event network-vif-plugged-fa073f2b-de2e-4fae-9203-432a59201885 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.916 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "7a56de80-4437-4013-96c2-be1937f088e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.916 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.916 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.917 188781 DEBUG nova.virt.libvirt.vif [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:35:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2015353785',display_name='tempest-ServersTestManualDisk-server-2015353785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2015353785',id=8,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFXFDQijSANFDoUVi84SuiAfHVF3spIPoYnKYJFGqNIq4PhfTmqJYMCiMcaurCo/ihMWQ4HRhinwq1C8zYZck26Imnue1gKTPXfH2xEFNH/1E/za2XGNR7k+Iye5A44NOw==',key_name='tempest-keypair-99209297',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5dd04de830547fc9be85d60a48c5a31',ramdisk_id='',reservation_id='r-02xox5bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-377148035',owner_user_name='tempest-ServersTestManualDisk-377148035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:35:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f852f439f2394296a1bd7c9dfc0f03cc',uuid=7a56de80-4437-4013-96c2-be1937f088e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.918 188781 DEBUG nova.network.os_vif_util [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Converting VIF {"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.918 188781 DEBUG nova.network.os_vif_util [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.919 188781 DEBUG os_vif [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.919 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.920 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.920 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.923 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.923 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa073f2b-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.924 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa073f2b-de, col_values=(('external_ids', {'iface-id': 'fa073f2b-de2e-4fae-9203-432a59201885', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:e7:76', 'vm-uuid': '7a56de80-4437-4013-96c2-be1937f088e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:40 compute-0 NetworkManager[57033]: <info>  [1771533340.9266] manager: (tapfa073f2b-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.925 188781 DEBUG nova.network.neutron [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.927 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.928 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.932 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.933 188781 INFO os_vif [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de')
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.946 188781 DEBUG oslo_concurrency.lockutils [req-c28dfcf8-b672-4a44-9f29-b9ab03c69cf9 req-7aab7ec4-6914-4598-9068-3364ecd38f5b 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.946 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquired lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.947 188781 DEBUG nova.network.neutron [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.992 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.992 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.992 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] No VIF found with MAC fa:16:3e:20:e7:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:35:40 compute-0 nova_compute[188777]: 2026-02-19 20:35:40.993 188781 INFO nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Using config drive
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.463 188781 DEBUG nova.network.neutron [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.743 188781 INFO nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Creating config drive at /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.config
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.747 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbcv_qk4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.863 188781 DEBUG oslo_concurrency.processutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbcv_qk4i" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:41 compute-0 kernel: tapfa073f2b-de: entered promiscuous mode
Feb 19 20:35:41 compute-0 NetworkManager[57033]: <info>  [1771533341.9173] manager: (tapfa073f2b-de): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.919 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:41 compute-0 ovn_controller[98843]: 2026-02-19T20:35:41Z|00071|binding|INFO|Claiming lport fa073f2b-de2e-4fae-9203-432a59201885 for this chassis.
Feb 19 20:35:41 compute-0 ovn_controller[98843]: 2026-02-19T20:35:41Z|00072|binding|INFO|fa073f2b-de2e-4fae-9203-432a59201885: Claiming fa:16:3e:20:e7:76 10.100.0.14
Feb 19 20:35:41 compute-0 ovn_controller[98843]: 2026-02-19T20:35:41Z|00073|binding|INFO|Setting lport fa073f2b-de2e-4fae-9203-432a59201885 ovn-installed in OVS
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.927 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.930 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.930 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:e7:76 10.100.0.14'], port_security=['fa:16:3e:20:e7:76 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7a56de80-4437-4013-96c2-be1937f088e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cace72e-1722-4ebe-9704-2d9205c01a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5dd04de830547fc9be85d60a48c5a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4860b750-9676-4d87-b85b-18d91151e966', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=120fdf81-6f33-4b0c-bcc6-c0f7b8146b65, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=fa073f2b-de2e-4fae-9203-432a59201885) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:35:41 compute-0 nova_compute[188777]: 2026-02-19 20:35:41.932 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:41 compute-0 ovn_controller[98843]: 2026-02-19T20:35:41Z|00074|binding|INFO|Setting lport fa073f2b-de2e-4fae-9203-432a59201885 up in Southbound
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.932 108175 INFO neutron.agent.ovn.metadata.agent [-] Port fa073f2b-de2e-4fae-9203-432a59201885 in datapath 6cace72e-1722-4ebe-9704-2d9205c01a28 bound to our chassis
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.934 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6cace72e-1722-4ebe-9704-2d9205c01a28
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.946 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[24d42783-e38a-40f4-99f8-a6b2b7914317]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.947 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6cace72e-11 in ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.948 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6cace72e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.949 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[94124d09-184a-4855-971a-73ee8ce544e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.950 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[633e4aba-1c71-4ab5-9c7f-852774ebacca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:41 compute-0 systemd-machined[158158]: New machine qemu-7-instance-00000008.
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.958 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca0ea50-62ed-4d51-a605-09a8ca2589c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:41 compute-0 systemd-udevd[252436]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.969 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[445b1895-23ca-4933-a75d-6e86f8477698]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:41 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000008.
Feb 19 20:35:41 compute-0 NetworkManager[57033]: <info>  [1771533341.9760] device (tapfa073f2b-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:35:41 compute-0 NetworkManager[57033]: <info>  [1771533341.9794] device (tapfa073f2b-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:35:41 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:41.995 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[0f673243-49a3-4101-bc81-863f9d196ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.002 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd5a18f-78ff-4b3e-bd7d-a0144e7c0c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 NetworkManager[57033]: <info>  [1771533342.0051] manager: (tap6cace72e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.024 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[791ce626-7217-4e46-a654-8e6d2614e6b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.028 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[b89b8e39-463d-43a5-ab63-20ec91bb53a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 NetworkManager[57033]: <info>  [1771533342.0477] device (tap6cace72e-10): carrier: link connected
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.050 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[7a233fac-a636-49a4-989c-e0326b5adbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.064 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a85fd774-840e-4769-b8b3-5e1496e357b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cace72e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:2c:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485351, 'reachable_time': 26148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252467, 'error': None, 'target': 'ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.074 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c565346f-e027-43e0-a07d-33e98e7e191b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:2c50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485351, 'tstamp': 485351}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252468, 'error': None, 'target': 'ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.087 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[709b31d4-4722-4604-8a5e-f2546bef35d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cace72e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:2c:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485351, 'reachable_time': 26148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252469, 'error': None, 'target': 'ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.111 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e8726f8d-834a-44b2-9e4e-cb08a864e684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.158 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[29a13123-17e0-4bbd-8ab9-652d8381a846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.160 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cace72e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.161 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.162 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cace72e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:42 compute-0 kernel: tap6cace72e-10: entered promiscuous mode
Feb 19 20:35:42 compute-0 NetworkManager[57033]: <info>  [1771533342.1655] manager: (tap6cace72e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.167 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6cace72e-10, col_values=(('external_ids', {'iface-id': 'f2e95930-8476-4984-abcc-447ec31e474b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
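
The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovsdbapp's public Open_vSwitch API. A sketch of the same sequence issued directly, assuming the default local OVSDB socket path; port, bridge, and iface-id values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap6cace72e-10', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap6cace72e-10', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6cace72e-10',
            ('external_ids',
             {'iface-id': 'f2e95930-8476-4984-abcc-447ec31e474b'})))
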
Feb 19 20:35:42 compute-0 ovn_controller[98843]: 2026-02-19T20:35:42Z|00075|binding|INFO|Releasing lport f2e95930-8476-4984-abcc-447ec31e474b from this chassis (sb_readonly=0)
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.166 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.171 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6cace72e-1722-4ebe-9704-2d9205c01a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6cace72e-1722-4ebe-9704-2d9205c01a28.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
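
The ENOENT message above is harmless: before (re)spawning the proxy, the agent probes for an existing haproxy pidfile, and a missing file simply means no proxy is running for this network yet. A simplified sketch of that tolerant read (not neutron's exact helper):

    # Read a pidfile, treating "missing" the same as "no process yet".
    def get_value_from_file(path, converter=int):
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None  # ENOENT here is expected on first spawn
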
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.172 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[8c77e45e-7631-49e2-831d-6d1fe690e47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.173 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-6cace72e-1722-4ebe-9704-2d9205c01a28
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/6cace72e-1722-4ebe-9704-2d9205c01a28.pid.haproxy
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID 6cace72e-1722-4ebe-9704-2d9205c01a28
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:35:42 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:42.174 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28', 'env', 'PROCESS_TAG=haproxy-6cace72e-1722-4ebe-9704-2d9205c01a28', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6cace72e-1722-4ebe-9704-2d9205c01a28.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
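
Having rendered the configuration above into /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf, the agent launches haproxy inside the ovnmeta- namespace via neutron-rootwrap, as the logged command line shows. Stripped of the rootwrap and PROCESS_TAG plumbing, the equivalent call is roughly the following (requires root; paths copied from the log):

    import subprocess

    NET_ID = '6cace72e-1722-4ebe-9704-2d9205c01a28'
    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-' + NET_ID,
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NET_ID],
        check=True)
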
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.182 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:42 compute-0 ovn_controller[98843]: 2026-02-19T20:35:42Z|00076|binding|INFO|Releasing lport f8bda42e-82fd-444e-9eec-587fd2a85c15 from this chassis (sb_readonly=0)
Feb 19 20:35:42 compute-0 ovn_controller[98843]: 2026-02-19T20:35:42Z|00077|binding|INFO|Releasing lport f2e95930-8476-4984-abcc-447ec31e474b from this chassis (sb_readonly=0)
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.306 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:42 compute-0 ovn_controller[98843]: 2026-02-19T20:35:42Z|00078|binding|INFO|Releasing lport f8bda42e-82fd-444e-9eec-587fd2a85c15 from this chassis (sb_readonly=0)
Feb 19 20:35:42 compute-0 ovn_controller[98843]: 2026-02-19T20:35:42Z|00079|binding|INFO|Releasing lport f2e95930-8476-4984-abcc-447ec31e474b from this chassis (sb_readonly=0)
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.348 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:42 compute-0 podman[252501]: 2026-02-19 20:35:42.542448464 +0000 UTC m=+0.066245367 container create 603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 19 20:35:42 compute-0 systemd[1]: Started libpod-conmon-603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1.scope.
Feb 19 20:35:42 compute-0 podman[252501]: 2026-02-19 20:35:42.504254972 +0000 UTC m=+0.028051775 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:35:42 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759cb1784f6b9967fd8801acda01c83d35ea9bdbed6995261fff0531acdd20a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:35:42 compute-0 podman[252501]: 2026-02-19 20:35:42.628389252 +0000 UTC m=+0.152186065 container init 603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 19 20:35:42 compute-0 podman[252501]: 2026-02-19 20:35:42.633921125 +0000 UTC m=+0.157717918 container start 603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:35:42 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [NOTICE]   (252526) : New worker (252528) forked
Feb 19 20:35:42 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [NOTICE]   (252526) : Loading success.
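
On this deployment the proxy runs wrapped in a podman container (the create/init/start events above), and the two haproxy NOTICE lines confirm the master process forked a worker and loaded the config. One way to verify the container afterwards, assuming the container name from the log:

    import json, subprocess

    out = subprocess.run(
        ['podman', 'inspect',
         'neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28'],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out)[0]['State']['Status'])  # expect: running
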
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.658 188781 DEBUG nova.network.neutron [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Updated VIF entry in instance network info cache for port fa073f2b-de2e-4fae-9203-432a59201885. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.658 188781 DEBUG nova.network.neutron [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Updating instance_info_cache with network_info: [{"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.662 188781 DEBUG nova.compute.manager [req-b897c3ec-1b65-4b33-b839-7d06886e3fcc req-b3db2b02-2a2f-4983-920d-603f4b3fd8d7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Received event network-vif-plugged-fa073f2b-de2e-4fae-9203-432a59201885 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.662 188781 DEBUG oslo_concurrency.lockutils [req-b897c3ec-1b65-4b33-b839-7d06886e3fcc req-b3db2b02-2a2f-4983-920d-603f4b3fd8d7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "7a56de80-4437-4013-96c2-be1937f088e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.663 188781 DEBUG oslo_concurrency.lockutils [req-b897c3ec-1b65-4b33-b839-7d06886e3fcc req-b3db2b02-2a2f-4983-920d-603f4b3fd8d7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.663 188781 DEBUG oslo_concurrency.lockutils [req-b897c3ec-1b65-4b33-b839-7d06886e3fcc req-b3db2b02-2a2f-4983-920d-603f4b3fd8d7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.663 188781 DEBUG nova.compute.manager [req-b897c3ec-1b65-4b33-b839-7d06886e3fcc req-b3db2b02-2a2f-4983-920d-603f4b3fd8d7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Processing event network-vif-plugged-fa073f2b-de2e-4fae-9203-432a59201885 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.698 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
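
The network-vif-plugged event that unblocks the wait above is delivered by Neutron through Nova's os-server-external-events API. A sketch of that REST call, with a placeholder endpoint and token; the server and port UUIDs are copied from the log:

    import requests

    NOVA = 'http://nova-api.example.com:8774/v2.1'     # placeholder endpoint
    requests.post(
        NOVA + '/os-server-external-events',
        headers={'X-Auth-Token': '<service-token>'},   # placeholder credential
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '7a56de80-4437-4013-96c2-be1937f088e1',
            'tag': 'fa073f2b-de2e-4fae-9203-432a59201885',
            'status': 'completed',
        }]})
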
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.699 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533342.6982677, 7a56de80-4437-4013-96c2-be1937f088e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.700 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] VM Started (Lifecycle Event)
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.703 188781 DEBUG oslo_concurrency.lockutils [req-4386857b-f670-4a2d-b8e7-f655d43703dd req-061c7ca8-c67c-4b8f-9125-e685ff3d34c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.705 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.710 188781 INFO nova.virt.libvirt.driver [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Instance spawned successfully.
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.711 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.719 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.724 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
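
For reference when reading the sync line above: in nova's power-state encoding, DB power_state 0 is NOSTATE (nothing recorded yet) and VM power_state 1 is RUNNING, so the mismatch during spawn is expected and resolved once the task completes.

    # Power-state codes as defined in nova/compute/power_state.py.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7
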
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.731 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.732 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.733 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.733 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.734 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.734 188781 DEBUG nova.virt.libvirt.driver [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.762 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.763 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533342.6984324, 7a56de80-4437-4013-96c2-be1937f088e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.763 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] VM Paused (Lifecycle Event)
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.806 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.813 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533342.7044337, 7a56de80-4437-4013-96c2-be1937f088e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.814 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] VM Resumed (Lifecycle Event)
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.817 188781 INFO nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Took 9.22 seconds to spawn the instance on the hypervisor.
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.817 188781 DEBUG nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.845 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.851 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.896 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.921 188781 INFO nova.compute.manager [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Took 9.81 seconds to build instance.
Feb 19 20:35:42 compute-0 nova_compute[188777]: 2026-02-19 20:35:42.936 188781 DEBUG oslo_concurrency.lockutils [None req-5735c3b3-a59e-453f-b109-8c7f9be48eb4 f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.499 188781 DEBUG nova.network.neutron [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.526 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Releasing lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.527 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance network_info: |[{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.530 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Start _get_guest_xml network_info=[{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.537 188781 WARNING nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.544 188781 DEBUG nova.virt.libvirt.host [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.545 188781 DEBUG nova.virt.libvirt.host [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.550 188781 DEBUG nova.virt.libvirt.host [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.551 188781 DEBUG nova.virt.libvirt.host [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.551 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.551 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.552 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.552 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.553 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.553 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.553 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.554 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.554 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.554 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.554 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.555 188781 DEBUG nova.virt.hardware [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
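
The topology lines above enumerate every (sockets, cores, threads) factorisation of the vCPU count that fits within the 65536-per-dimension limits; with 1 vCPU and no flavor or image preference, only 1:1:1 survives. A simplified sketch of that enumeration (not nova's exact code):

    # Candidate guest CPU topologies: every factorisation of the vCPU
    # count within the per-dimension limits logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
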
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.558 188781 DEBUG nova.virt.libvirt.vif [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:35:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-541687296',display_name='tempest-ServerActionsTestJSON-server-541687296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-541687296',id=7,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBGUXfqYgLpnaK0US8EwHfzCPv+m8vpQ+fPWU8q/hF6l9cNu9x6P14aljSv28A+SD7n7yEsgSHzHQXS8tsguQqzUZEu4v3AxpVAXh2tIOAWxaA3uNPd6KcWlT+WQySBOhg==',key_name='tempest-keypair-175997513',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c8b3e035bb347acad9c4027457ee296',ramdisk_id='',reservation_id='r-a1krnbi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1818290169',owner_user_name='tempest-ServerActionsTestJSON-1818290169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:35:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='43931603bc9f40eab8e548129d4c50cb',uuid=da31f324-38ad-4f77-b724-3ef1628be336,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.559 188781 DEBUG nova.network.os_vif_util [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converting VIF {"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.560 188781 DEBUG nova.network.os_vif_util [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.561 188781 DEBUG nova.objects.instance [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'pci_devices' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.574 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <uuid>da31f324-38ad-4f77-b724-3ef1628be336</uuid>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <name>instance-00000007</name>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:name>tempest-ServerActionsTestJSON-server-541687296</nova:name>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:35:43</nova:creationTime>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:user uuid="43931603bc9f40eab8e548129d4c50cb">tempest-ServerActionsTestJSON-1818290169-project-member</nova:user>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:project uuid="3c8b3e035bb347acad9c4027457ee296">tempest-ServerActionsTestJSON-1818290169</nova:project>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         <nova:port uuid="b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2">
Feb 19 20:35:43 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <system>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <entry name="serial">da31f324-38ad-4f77-b724-3ef1628be336</entry>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <entry name="uuid">da31f324-38ad-4f77-b724-3ef1628be336</entry>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </system>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <os>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </os>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <features>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </features>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:c6:08:9f"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <target dev="tapb9a6ef82-e3"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/console.log" append="off"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <video>
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </video>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:35:43 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:35:43 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:35:43 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:35:43 compute-0 nova_compute[188777]: </domain>
Feb 19 20:35:43 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
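
The generated XML above is handed to libvirt to define and boot the guest. Reduced to bare python-libvirt calls, and assuming the XML has been saved alongside the instance (nova keeps a copy at <instance_dir>/libvirt.xml), the equivalent is roughly:

    import libvirt

    XML_PATH = ('/var/lib/nova/instances/'
                'da31f324-38ad-4f77-b724-3ef1628be336/libvirt.xml')

    conn = libvirt.open('qemu:///system')
    try:
        with open(XML_PATH) as f:
            dom = conn.defineXML(f.read())  # persist the domain definition
        dom.create()                        # power the guest on
    finally:
        conn.close()
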
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.575 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Preparing to wait for external event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.575 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.576 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.576 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.577 188781 DEBUG nova.virt.libvirt.vif [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:35:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-541687296',display_name='tempest-ServerActionsTestJSON-server-541687296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-541687296',id=7,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBGUXfqYgLpnaK0US8EwHfzCPv+m8vpQ+fPWU8q/hF6l9cNu9x6P14aljSv28A+SD7n7yEsgSHzHQXS8tsguQqzUZEu4v3AxpVAXh2tIOAWxaA3uNPd6KcWlT+WQySBOhg==',key_name='tempest-keypair-175997513',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c8b3e035bb347acad9c4027457ee296',ramdisk_id='',reservation_id='r-a1krnbi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1818290169',owner_user_name='tempest-ServerActionsTestJSON-1818290169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:35:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='43931603bc9f40eab8e548129d4c50cb',uuid=da31f324-38ad-4f77-b724-3ef1628be336,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.577 188781 DEBUG nova.network.os_vif_util [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converting VIF {"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.578 188781 DEBUG nova.network.os_vif_util [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.578 188781 DEBUG os_vif [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.579 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.579 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.579 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.583 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.583 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9a6ef82-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.584 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9a6ef82-e3, col_values=(('external_ids', {'iface-id': 'b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:08:9f', 'vm-uuid': 'da31f324-38ad-4f77-b724-3ef1628be336'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.586 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:43 compute-0 NetworkManager[57033]: <info>  [1771533343.5882] manager: (tapb9a6ef82-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.588 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.593 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.594 188781 INFO os_vif [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3')
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.644 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.645 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.645 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] No VIF found with MAC fa:16:3e:c6:08:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.646 188781 INFO nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Using config drive
Feb 19 20:35:43 compute-0 nova_compute[188777]: 2026-02-19 20:35:43.719 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.086 188781 INFO nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Creating config drive at /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.091 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0huwrk4x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.208 188781 DEBUG oslo_concurrency.processutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0huwrk4x" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:44 compute-0 kernel: tapb9a6ef82-e3: entered promiscuous mode
Feb 19 20:35:44 compute-0 NetworkManager[57033]: <info>  [1771533344.2614] manager: (tapb9a6ef82-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Feb 19 20:35:44 compute-0 ovn_controller[98843]: 2026-02-19T20:35:44Z|00080|binding|INFO|Claiming lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for this chassis.
Feb 19 20:35:44 compute-0 ovn_controller[98843]: 2026-02-19T20:35:44Z|00081|binding|INFO|b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2: Claiming fa:16:3e:c6:08:9f 10.100.0.13
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.265 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:44 compute-0 NetworkManager[57033]: <info>  [1771533344.2823] device (tapb9a6ef82-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:35:44 compute-0 NetworkManager[57033]: <info>  [1771533344.2876] device (tapb9a6ef82-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.287 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:44 compute-0 ovn_controller[98843]: 2026-02-19T20:35:44Z|00082|binding|INFO|Setting lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 ovn-installed in OVS
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.295 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:44 compute-0 systemd-machined[158158]: New machine qemu-8-instance-00000007.
Feb 19 20:35:44 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000007.
Feb 19 20:35:44 compute-0 ovn_controller[98843]: 2026-02-19T20:35:44Z|00083|binding|INFO|Setting lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 up in Southbound
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.926 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:08:9f 10.100.0.13'], port_security=['fa:16:3e:c6:08:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'da31f324-38ad-4f77-b724-3ef1628be336', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02e853c-7c37-4c12-a959-0da0ff097734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c8b3e035bb347acad9c4027457ee296', 'neutron:revision_number': '2', 'neutron:security_group_ids': '745eb45a-1fad-4b86-be2d-ed9c647c807b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5262953e-25bb-44de-850c-ced354d0d447, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.930 108175 INFO neutron.agent.ovn.metadata.agent [-] Port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 in datapath d02e853c-7c37-4c12-a959-0da0ff097734 bound to our chassis
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.933 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d02e853c-7c37-4c12-a959-0da0ff097734
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.944 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[1b731943-477d-48e7-8bd0-6ad6e5cd999a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.946 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd02e853c-71 in ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.948 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd02e853c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.948 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[02836993-86b5-4f07-a1de-53b75bb50d66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.955 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533344.954405, da31f324-38ad-4f77-b724-3ef1628be336 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:44 compute-0 nova_compute[188777]: 2026-02-19 20:35:44.956 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] VM Started (Lifecycle Event)
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.952 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb22c20-6827-4211-86e3-6f16dbf0748e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.968 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[b0343f17-d96c-4cae-9278-06763cca5a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:44 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:44.989 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[9b416bd4-b154-4bef-942d-8384250f356b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.012 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[a557b0d3-1fa5-4726-9205-74039b7d02f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.019 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[503db50b-1eca-4283-9cff-4a8ea103d53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 NetworkManager[57033]: <info>  [1771533345.0243] manager: (tapd02e853c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.040 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.049 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533344.956477, da31f324-38ad-4f77-b724-3ef1628be336 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.049 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] VM Paused (Lifecycle Event)
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.050 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1118da-9812-41b3-86c7-07fc02bed8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.053 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[19ce8558-81d1-4c10-b4ed-df6fe80b945b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 systemd-udevd[252579]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.063 188781 DEBUG nova.compute.manager [req-fdabc750-7f08-4d5a-8a3f-dcf8376763da req-6e24aea6-d789-4e5a-852c-7ae6cb03a85a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Received event network-vif-plugged-fa073f2b-de2e-4fae-9203-432a59201885 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.064 188781 DEBUG oslo_concurrency.lockutils [req-fdabc750-7f08-4d5a-8a3f-dcf8376763da req-6e24aea6-d789-4e5a-852c-7ae6cb03a85a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "7a56de80-4437-4013-96c2-be1937f088e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.064 188781 DEBUG oslo_concurrency.lockutils [req-fdabc750-7f08-4d5a-8a3f-dcf8376763da req-6e24aea6-d789-4e5a-852c-7ae6cb03a85a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.064 188781 DEBUG oslo_concurrency.lockutils [req-fdabc750-7f08-4d5a-8a3f-dcf8376763da req-6e24aea6-d789-4e5a-852c-7ae6cb03a85a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.065 188781 DEBUG nova.compute.manager [req-fdabc750-7f08-4d5a-8a3f-dcf8376763da req-6e24aea6-d789-4e5a-852c-7ae6cb03a85a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] No waiting events found dispatching network-vif-plugged-fa073f2b-de2e-4fae-9203-432a59201885 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.065 188781 WARNING nova.compute.manager [req-fdabc750-7f08-4d5a-8a3f-dcf8376763da req-6e24aea6-d789-4e5a-852c-7ae6cb03a85a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Received unexpected event network-vif-plugged-fa073f2b-de2e-4fae-9203-432a59201885 for instance with vm_state active and task_state None.
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.070 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.074 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:35:45 compute-0 NetworkManager[57033]: <info>  [1771533345.0855] device (tapd02e853c-70): carrier: link connected
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.088 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0865cf-5b65-4d24-a78d-0f8bf4c8ed0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.106 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[082cc0fe-ce7b-474c-9ff6-f8553f5107a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd02e853c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:2a:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485654, 'reachable_time': 37379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252598, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.113 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.121 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[8a24209b-557a-45db-99b2-2967a2d96119]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:2a35'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485654, 'tstamp': 485654}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252599, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.136 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[59b646d2-c6e2-4ad2-8752-7c550ca93a84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd02e853c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:2a:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485654, 'reachable_time': 37379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252600, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.159 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[75a669cd-9c54-485a-8f44-1b3bfc942c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.202 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc797e9-33fc-48cc-88a3-9b3dcdaee380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.203 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02e853c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.204 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.205 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02e853c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:45 compute-0 kernel: tapd02e853c-70: entered promiscuous mode
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.207 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:45 compute-0 NetworkManager[57033]: <info>  [1771533345.2076] manager: (tapd02e853c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.212 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd02e853c-70, col_values=(('external_ids', {'iface-id': 'c4f25fb9-c5df-4323-a436-ca67d28f2bc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.214 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:45 compute-0 ovn_controller[98843]: 2026-02-19T20:35:45Z|00084|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.217 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d02e853c-7c37-4c12-a959-0da0ff097734.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d02e853c-7c37-4c12-a959-0da0ff097734.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.219 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.218 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[3429a242-c8f5-4230-ba56-70c0fcaf75c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.220 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-d02e853c-7c37-4c12-a959-0da0ff097734
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/d02e853c-7c37-4c12-a959-0da0ff097734.pid.haproxy
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID d02e853c-7c37-4c12-a959-0da0ff097734
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:35:45 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:45.222 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'env', 'PROCESS_TAG=haproxy-d02e853c-7c37-4c12-a959-0da0ff097734', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d02e853c-7c37-4c12-a959-0da0ff097734.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:35:45 compute-0 podman[252630]: 2026-02-19 20:35:45.618538451 +0000 UTC m=+0.056658797 container create 9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:35:45 compute-0 systemd[1]: Started libpod-conmon-9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5.scope.
Feb 19 20:35:45 compute-0 podman[252630]: 2026-02-19 20:35:45.586236474 +0000 UTC m=+0.024356850 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:35:45 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4188f91374d22aefb111b16ec42944fa6f82fc968b49fa4da9404352f73dda6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:35:45 compute-0 podman[252630]: 2026-02-19 20:35:45.70766819 +0000 UTC m=+0.145788546 container init 9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 19 20:35:45 compute-0 podman[252630]: 2026-02-19 20:35:45.718215918 +0000 UTC m=+0.156336254 container start 9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:35:45 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [NOTICE]   (252649) : New worker (252651) forked
Feb 19 20:35:45 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [NOTICE]   (252649) : Loading success.
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.967 188781 DEBUG nova.compute.manager [req-c753d1e8-216b-40e0-bdc6-c4b05c706f9c req-9875ca33-2bbf-4d86-a3b8-1d3b672997d6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.968 188781 DEBUG oslo_concurrency.lockutils [req-c753d1e8-216b-40e0-bdc6-c4b05c706f9c req-9875ca33-2bbf-4d86-a3b8-1d3b672997d6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.968 188781 DEBUG oslo_concurrency.lockutils [req-c753d1e8-216b-40e0-bdc6-c4b05c706f9c req-9875ca33-2bbf-4d86-a3b8-1d3b672997d6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.969 188781 DEBUG oslo_concurrency.lockutils [req-c753d1e8-216b-40e0-bdc6-c4b05c706f9c req-9875ca33-2bbf-4d86-a3b8-1d3b672997d6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.970 188781 DEBUG nova.compute.manager [req-c753d1e8-216b-40e0-bdc6-c4b05c706f9c req-9875ca33-2bbf-4d86-a3b8-1d3b672997d6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Processing event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.972 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.979 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533345.9795387, 7cfaa330-b089-4421-aad5-ee9cdec71c71 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.980 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] VM Resumed (Lifecycle Event)
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.982 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.987 188781 INFO nova.virt.libvirt.driver [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Instance spawned successfully.
Feb 19 20:35:45 compute-0 nova_compute[188777]: 2026-02-19 20:35:45.988 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.010 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.018 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.022 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.023 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.023 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.024 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.025 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.025 188781 DEBUG nova.virt.libvirt.driver [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.061 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.098 188781 INFO nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Took 20.25 seconds to spawn the instance on the hypervisor.
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.099 188781 DEBUG nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.178 188781 INFO nova.compute.manager [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Took 20.86 seconds to build instance.
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.201 188781 DEBUG oslo_concurrency.lockutils [None req-7dd0ed4f-5248-43de-b655-cbeab8a7dbb2 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.202 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 18.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.203 188781 INFO nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:35:46 compute-0 nova_compute[188777]: 2026-02-19 20:35:46.203 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:47 compute-0 NetworkManager[57033]: <info>  [1771533347.3303] manager: (patch-br-int-to-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 19 20:35:47 compute-0 nova_compute[188777]: 2026-02-19 20:35:47.329 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:47 compute-0 NetworkManager[57033]: <info>  [1771533347.3319] manager: (patch-provnet-17fea16b-f680-4035-8a9a-3c1f8510dc9d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb 19 20:35:47 compute-0 nova_compute[188777]: 2026-02-19 20:35:47.354 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:47 compute-0 ovn_controller[98843]: 2026-02-19T20:35:47Z|00085|binding|INFO|Releasing lport f8bda42e-82fd-444e-9eec-587fd2a85c15 from this chassis (sb_readonly=0)
Feb 19 20:35:47 compute-0 ovn_controller[98843]: 2026-02-19T20:35:47Z|00086|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:35:47 compute-0 ovn_controller[98843]: 2026-02-19T20:35:47Z|00087|binding|INFO|Releasing lport f2e95930-8476-4984-abcc-447ec31e474b from this chassis (sb_readonly=0)
Feb 19 20:35:47 compute-0 nova_compute[188777]: 2026-02-19 20:35:47.379 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.304 188781 DEBUG nova.compute.manager [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Received event network-changed-fa073f2b-de2e-4fae-9203-432a59201885 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.305 188781 DEBUG nova.compute.manager [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Refreshing instance network info cache due to event network-changed-fa073f2b-de2e-4fae-9203-432a59201885. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.305 188781 DEBUG oslo_concurrency.lockutils [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.306 188781 DEBUG oslo_concurrency.lockutils [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.306 188781 DEBUG nova.network.neutron [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Refreshing network info cache for port fa073f2b-de2e-4fae-9203-432a59201885 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.586 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:48 compute-0 nova_compute[188777]: 2026-02-19 20:35:48.721 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.581 188781 DEBUG nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.583 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.584 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.585 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.586 188781 DEBUG nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] No waiting events found dispatching network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.588 188781 WARNING nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received unexpected event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c for instance with vm_state active and task_state None.
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.589 188781 DEBUG nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.590 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.591 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.592 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.593 188781 DEBUG nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Processing event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.595 188781 DEBUG nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.596 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.597 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.598 188781 DEBUG oslo_concurrency.lockutils [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.599 188781 DEBUG nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.600 188781 WARNING nova.compute.manager [req-01f7037c-88df-4880-ab46-fece7987b2f8 req-e7689cca-07b2-41e2-85ae-79087ed8822d 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received unexpected event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with vm_state building and task_state spawning.
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.603 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
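The sequence above is a waiter/latch pattern: the spawning thread registers the network-vif-plugged event it expects, the external-event handler pops the matching waiter ("Instance event wait completed in 4 seconds"), and an event arriving with no registered waiter produces the "Received unexpected event" WARNING lines. A schematic stand-in using only the standard library (an illustration of the pattern, not nova's actual InstanceEvents implementation):

import threading

class EventLatch:
    """Toy register/pop latch mirroring the pattern in the log above."""
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(self, uuid, name):
        latch = threading.Event()
        with self._lock:
            self._waiters[(uuid, name)] = latch
        return latch  # the spawning thread blocks on latch.wait(timeout)

    def dispatch(self, uuid, name):
        with self._lock:
            latch = self._waiters.pop((uuid, name), None)
        if latch is None:
            return False  # "No waiting events found" -> unexpected-event WARNING
        latch.set()       # wakes the waiter: "Instance event wait completed"
        return True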
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.610 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533349.6098433, da31f324-38ad-4f77-b724-3ef1628be336 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.611 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] VM Resumed (Lifecycle Event)
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.614 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.623 188781 INFO nova.virt.libvirt.driver [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance spawned successfully.
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.625 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.643 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.653 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.656 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.657 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.658 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.659 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.659 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.660 188781 DEBUG nova.virt.libvirt.driver [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.709 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.771 188781 INFO nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Took 17.92 seconds to spawn the instance on the hypervisor.
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.771 188781 DEBUG nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.868 188781 INFO nova.compute.manager [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Took 18.43 seconds to build instance.
Feb 19 20:35:49 compute-0 nova_compute[188777]: 2026-02-19 20:35:49.892 188781 DEBUG oslo_concurrency.lockutils [None req-da322364-98a9-4769-a752-2b5536bbcc9c 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:50 compute-0 podman[252663]: 2026-02-19 20:35:50.399980888 +0000 UTC m=+0.084094092 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, build-date=2026-02-05T04:57:10Z)
Feb 19 20:35:50 compute-0 podman[252664]: 2026-02-19 20:35:50.435128523 +0000 UTC m=+0.119555447 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.640 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "7a56de80-4437-4013-96c2-be1937f088e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.641 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.641 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "7a56de80-4437-4013-96c2-be1937f088e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.642 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.642 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.643 188781 INFO nova.compute.manager [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Terminating instance
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.644 188781 DEBUG nova.compute.manager [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:35:51 compute-0 kernel: tapfa073f2b-de (unregistering): left promiscuous mode
Feb 19 20:35:51 compute-0 NetworkManager[57033]: <info>  [1771533351.6914] device (tapfa073f2b-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:35:51 compute-0 ovn_controller[98843]: 2026-02-19T20:35:51Z|00088|binding|INFO|Releasing lport fa073f2b-de2e-4fae-9203-432a59201885 from this chassis (sb_readonly=0)
Feb 19 20:35:51 compute-0 ovn_controller[98843]: 2026-02-19T20:35:51Z|00089|binding|INFO|Setting lport fa073f2b-de2e-4fae-9203-432a59201885 down in Southbound
Feb 19 20:35:51 compute-0 ovn_controller[98843]: 2026-02-19T20:35:51Z|00090|binding|INFO|Removing iface tapfa073f2b-de ovn-installed in OVS
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.706 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.711 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:51.713 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:e7:76 10.100.0.14'], port_security=['fa:16:3e:20:e7:76 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7a56de80-4437-4013-96c2-be1937f088e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cace72e-1722-4ebe-9704-2d9205c01a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5dd04de830547fc9be85d60a48c5a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4860b750-9676-4d87-b85b-18d91151e966', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=120fdf81-6f33-4b0c-bcc6-c0f7b8146b65, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=fa073f2b-de2e-4fae-9203-432a59201885) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:35:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:51.715 108175 INFO neutron.agent.ovn.metadata.agent [-] Port fa073f2b-de2e-4fae-9203-432a59201885 in datapath 6cace72e-1722-4ebe-9704-2d9205c01a28 unbound from our chassis
Feb 19 20:35:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:51.719 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6cace72e-1722-4ebe-9704-2d9205c01a28, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.721 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:51.722 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[57c5fd68-c582-419b-b40a-ef4453cb7d42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:51 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:51.725 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28 namespace which is not needed anymore
Feb 19 20:35:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 19 20:35:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000008.scope: Consumed 9.872s CPU time.
Feb 19 20:35:51 compute-0 systemd-machined[158158]: Machine qemu-7-instance-00000008 terminated.
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.896 188781 INFO nova.virt.libvirt.driver [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Instance destroyed successfully.
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.896 188781 DEBUG nova.objects.instance [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lazy-loading 'resources' on Instance uuid 7a56de80-4437-4013-96c2-be1937f088e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.913 188781 DEBUG nova.virt.libvirt.vif [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:35:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2015353785',display_name='tempest-ServersTestManualDisk-server-2015353785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2015353785',id=8,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFXFDQijSANFDoUVi84SuiAfHVF3spIPoYnKYJFGqNIq4PhfTmqJYMCiMcaurCo/ihMWQ4HRhinwq1C8zYZck26Imnue1gKTPXfH2xEFNH/1E/za2XGNR7k+Iye5A44NOw==',key_name='tempest-keypair-99209297',keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:35:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e5dd04de830547fc9be85d60a48c5a31',ramdisk_id='',reservation_id='r-02xox5bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-377148035',owner_user_name='tempest-ServersTestManualDisk-377148035-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:35:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f852f439f2394296a1bd7c9dfc0f03cc',uuid=7a56de80-4437-4013-96c2-be1937f088e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.915 188781 DEBUG nova.network.os_vif_util [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Converting VIF {"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.916 188781 DEBUG nova.network.os_vif_util [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.917 188781 DEBUG os_vif [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.919 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.919 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa073f2b-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
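The DelPortCommand transaction above is ovsdbapp's public OVS API removing the tap port from br-int during VIF unplug. A standalone sketch that would issue the same command, assuming a default local ovsdb-server socket path (adjust for the deployment):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed default ovsdb-server socket; not taken from this log.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
# Same operation the txn records: if_exists=True makes it a no-op
# when the tap device is already gone.
ovs.del_port('tapfa073f2b-de', bridge='br-int', if_exists=True).execute(
    check_error=True)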
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.921 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.924 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.926 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.929 188781 DEBUG nova.network.neutron [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Updated VIF entry in instance network info cache for port fa073f2b-de2e-4fae-9203-432a59201885. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.930 188781 DEBUG nova.network.neutron [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Updating instance_info_cache with network_info: [{"id": "fa073f2b-de2e-4fae-9203-432a59201885", "address": "fa:16:3e:20:e7:76", "network": {"id": "6cace72e-1722-4ebe-9704-2d9205c01a28", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-273808848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5dd04de830547fc9be85d60a48c5a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa073f2b-de", "ovs_interfaceid": "fa073f2b-de2e-4fae-9203-432a59201885", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.932 188781 INFO os_vif [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e7:76,bridge_name='br-int',has_traffic_filtering=True,id=fa073f2b-de2e-4fae-9203-432a59201885,network=Network(6cace72e-1722-4ebe-9704-2d9205c01a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa073f2b-de')
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.932 188781 INFO nova.virt.libvirt.driver [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Deleting instance files /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1_del
Feb 19 20:35:51 compute-0 nova_compute[188777]: 2026-02-19 20:35:51.933 188781 INFO nova.virt.libvirt.driver [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Deletion of /var/lib/nova/instances/7a56de80-4437-4013-96c2-be1937f088e1_del complete
Feb 19 20:35:51 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [NOTICE]   (252526) : haproxy version is 2.8.14-c23fe91
Feb 19 20:35:51 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [NOTICE]   (252526) : path to executable is /usr/sbin/haproxy
Feb 19 20:35:51 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [WARNING]  (252526) : Exiting Master process...
Feb 19 20:35:51 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [WARNING]  (252526) : Exiting Master process...
Feb 19 20:35:51 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [ALERT]    (252526) : Current worker (252528) exited with code 143 (Terminated)
Feb 19 20:35:51 compute-0 neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28[252517]: [WARNING]  (252526) : All workers exited. Exiting... (0)
Feb 19 20:35:51 compute-0 systemd[1]: libpod-603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1.scope: Deactivated successfully.
Feb 19 20:35:51 compute-0 podman[252729]: 2026-02-19 20:35:51.955454215 +0000 UTC m=+0.143153184 container died 603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.037 188781 DEBUG oslo_concurrency.lockutils [req-ad00fc52-7b4f-4e90-a096-40a06d6452f4 req-ec4b8e58-34e6-4232-9500-467c9bd7ccd1 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-7a56de80-4437-4013-96c2-be1937f088e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.054 188781 INFO nova.compute.manager [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Took 0.41 seconds to destroy the instance on the hypervisor.
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.055 188781 DEBUG oslo.service.loopingcall [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.056 188781 DEBUG nova.compute.manager [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.056 188781 DEBUG nova.network.neutron [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1-userdata-shm.mount: Deactivated successfully.
Feb 19 20:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-759cb1784f6b9967fd8801acda01c83d35ea9bdbed6995261fff0531acdd20a5-merged.mount: Deactivated successfully.
Feb 19 20:35:52 compute-0 podman[252729]: 2026-02-19 20:35:52.409397225 +0000 UTC m=+0.597096214 container cleanup 603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 19 20:35:52 compute-0 systemd[1]: libpod-conmon-603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1.scope: Deactivated successfully.
Feb 19 20:35:52 compute-0 podman[252775]: 2026-02-19 20:35:52.917124252 +0000 UTC m=+0.474543254 container remove 603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 19 20:35:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:52.925 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e6707341-766f-4735-acfc-86b3ad562fc1]: (4, ('Thu Feb 19 08:35:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28 (603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1)\n603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1\nThu Feb 19 08:35:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28 (603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1)\n603d4b4281975919ca4cf7a41d1393703f06630dce3531d7f4ce422bdc6f83c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:52.929 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9f503b-57cc-4975-9402-805faf590674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:52.930 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cace72e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.933 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:52 compute-0 kernel: tap6cace72e-10: left promiscuous mode
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.954 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:52.963 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[683a0097-d317-415a-857a-752e59fb9cb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:52 compute-0 nova_compute[188777]: 2026-02-19 20:35:52.975 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:52.985 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b5774c68-b2b2-4c52-9894-0b02397719a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:52 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:52.987 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4989f4-b480-417c-b7b4-c800d6d587a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:53 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:53.006 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[7918b7cc-4c25-4134-8d69-20a71d894747]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485345, 'reachable_time': 36263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252789, 'error': None, 'target': 'ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d6cace72e\x2d1722\x2d4ebe\x2d9704\x2d2d9205c01a28.mount: Deactivated successfully.
Feb 19 20:35:53 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:53.014 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:35:53 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:53.014 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[ef76df64-ecaf-47f3-b062-353c0d53de01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
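The privileged remove_netns call above deletes the now-empty ovnmeta namespace via netlink. The same operation stands alone with pyroute2 (which neutron's ip_lib itself builds on); a minimal sketch, assuming root privileges and that the namespace has already been emptied of ports as the teardown above ensures:

from pyroute2 import netns

ns = 'ovnmeta-6cace72e-1722-4ebe-9704-2d9205c01a28'
if ns in netns.listnetns():   # enumerates /var/run/netns entries
    netns.remove(ns)          # unmounts and unlinks the namespace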
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.342 188781 DEBUG nova.compute.manager [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-changed-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.343 188781 DEBUG nova.compute.manager [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Refreshing instance network info cache due to event network-changed-a8c2bacc-6880-4dc4-a4de-24561426643c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.344 188781 DEBUG oslo_concurrency.lockutils [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.344 188781 DEBUG oslo_concurrency.lockutils [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.345 188781 DEBUG nova.network.neutron [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Refreshing network info cache for port a8c2bacc-6880-4dc4-a4de-24561426643c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:35:53 compute-0 nova_compute[188777]: 2026-02-19 20:35:53.725 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:54 compute-0 podman[252790]: 2026-02-19 20:35:54.379706684 +0000 UTC m=+0.064831062 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 19 20:35:54 compute-0 nova_compute[188777]: 2026-02-19 20:35:54.885 188781 DEBUG nova.network.neutron [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:54 compute-0 nova_compute[188777]: 2026-02-19 20:35:54.906 188781 INFO nova.compute.manager [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Took 2.85 seconds to deallocate network for instance.
Feb 19 20:35:54 compute-0 nova_compute[188777]: 2026-02-19 20:35:54.959 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:54 compute-0 nova_compute[188777]: 2026-02-19 20:35:54.960 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.021 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.022 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.023 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.024 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.025 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
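[Editor's note] The Acquiring/acquired/released triplets above come from oslo.concurrency's named locks, which nova uses to serialize per-instance event handling and resource-tracker updates. A minimal sketch of the same pattern, reusing lock names from the log (the bodies are placeholders):

```python
from oslo_concurrency import lockutils

# Decorator form: all callers sharing this lock name are serialized,
# as with the per-instance "-events" lock in the log above.
@lockutils.synchronized('7cfaa330-b089-4421-aad5-ee9cdec71c71-events')
def _clear_events():
    pass  # placeholder body

# Context-manager form, as used around the resource tracker's
# "compute_resources" critical sections.
with lockutils.lock('compute_resources'):
    pass  # placeholder body
```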
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.027 188781 INFO nova.compute.manager [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Terminating instance
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.029 188781 DEBUG nova.compute.manager [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:35:55 compute-0 kernel: tapa8c2bacc-68 (unregistering): left promiscuous mode
Feb 19 20:35:55 compute-0 NetworkManager[57033]: <info>  [1771533355.0815] device (tapa8c2bacc-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:35:55 compute-0 ovn_controller[98843]: 2026-02-19T20:35:55Z|00091|binding|INFO|Releasing lport a8c2bacc-6880-4dc4-a4de-24561426643c from this chassis (sb_readonly=0)
Feb 19 20:35:55 compute-0 ovn_controller[98843]: 2026-02-19T20:35:55Z|00092|binding|INFO|Setting lport a8c2bacc-6880-4dc4-a4de-24561426643c down in Southbound
Feb 19 20:35:55 compute-0 ovn_controller[98843]: 2026-02-19T20:35:55Z|00093|binding|INFO|Removing iface tapa8c2bacc-68 ovn-installed in OVS
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.086 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.092 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3a:32 10.100.0.10'], port_security=['fa:16:3e:2e:3a:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7cfaa330-b089-4421-aad5-ee9cdec71c71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01786654-eac7-4d52-bce1-6a98f80c6941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65e6bca909aa4dd3ab1eecef7ed2aa09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63c5edfd-0baf-46d0-bc2d-21bd65015788', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e2aeadf-084b-41a0-880d-b0e27a6eeaf2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=a8c2bacc-6880-4dc4-a4de-24561426643c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.093 108175 INFO neutron.agent.ovn.metadata.agent [-] Port a8c2bacc-6880-4dc4-a4de-24561426643c in datapath 01786654-eac7-4d52-bce1-6a98f80c6941 unbound from our chassis
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.095 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01786654-eac7-4d52-bce1-6a98f80c6941, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.097 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4e21cdac-86c9-4988-8a3a-e81780b6727d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.098 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941 namespace which is not needed anymore
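[Editor's note] The `Matched UPDATE: PortBindingUpdatedEvent(...)` line above is ovsdbapp's event matcher firing on a Southbound `Port_Binding` row change; the arguments it prints (`events=('update',), table='Port_Binding', conditions=None`) map directly onto ovsdbapp's `RowEvent` constructor. A sketch of that watcher shape, with a placeholder handler body:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Fire on any update to the Port_Binding table, no row conditions.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # Invoked with the new and old row versions, e.g. when the
        # chassis column is cleared as the lport is released above.
        print('port', row.logical_port, 'up:', row.up)
```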
Feb 19 20:35:55 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 19 20:35:55 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 10.288s CPU time.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.125 188781 DEBUG nova.compute.provider_tree [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:35:55 compute-0 systemd-machined[158158]: Machine qemu-6-instance-00000006 terminated.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.157 188781 DEBUG nova.scheduler.client.report [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
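[Editor's note] The inventory payload above fixes the node's usable capacity: placement computes it per resource class as (total - reserved) * allocation_ratio. Worked out for the values in this log:

```python
# Capacity implied by the inventory reported above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```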
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.185 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.219 188781 INFO nova.scheduler.client.report [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Deleted allocations for instance 7a56de80-4437-4013-96c2-be1937f088e1
Feb 19 20:35:55 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [NOTICE]   (252374) : haproxy version is 2.8.14-c23fe91
Feb 19 20:35:55 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [NOTICE]   (252374) : path to executable is /usr/sbin/haproxy
Feb 19 20:35:55 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [WARNING]  (252374) : Exiting Master process...
Feb 19 20:35:55 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [WARNING]  (252374) : Exiting Master process...
Feb 19 20:35:55 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [ALERT]    (252374) : Current worker (252381) exited with code 143 (Terminated)
Feb 19 20:35:55 compute-0 neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941[252354]: [WARNING]  (252374) : All workers exited. Exiting... (0)
Feb 19 20:35:55 compute-0 systemd[1]: libpod-e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28.scope: Deactivated successfully.
Feb 19 20:35:55 compute-0 podman[252830]: 2026-02-19 20:35:55.2457534 +0000 UTC m=+0.053108296 container died e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.264 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.270 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28-userdata-shm.mount: Deactivated successfully.
Feb 19 20:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc72e41e0f1788eeaa2f231550ca7ca21d609b01f4449f7fbc2377e3a688378d-merged.mount: Deactivated successfully.
Feb 19 20:35:55 compute-0 podman[252830]: 2026-02-19 20:35:55.301063794 +0000 UTC m=+0.108418650 container cleanup e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 19 20:35:55 compute-0 systemd[1]: libpod-conmon-e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28.scope: Deactivated successfully.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.316 188781 INFO nova.virt.libvirt.driver [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Instance destroyed successfully.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.316 188781 DEBUG nova.objects.instance [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lazy-loading 'resources' on Instance uuid 7cfaa330-b089-4421-aad5-ee9cdec71c71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.365 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.366 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.366 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.366 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:35:55 compute-0 podman[252876]: 2026-02-19 20:35:55.372337646 +0000 UTC m=+0.048482592 container remove e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.401 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[fff48d73-0f67-41c4-ab46-4baaaf1bb6c0]: (4, ('Thu Feb 19 08:35:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941 (e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28)\ne903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28\nThu Feb 19 08:35:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941 (e903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28)\ne903765d830f67f61a79973ba475c51bebb048779921fef9642c7631008cbc28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.428 188781 DEBUG nova.virt.libvirt.vif [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1237028603',display_name='tempest-ServersTestJSON-server-1237028603',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1237028603',id=6,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6tJI6UCEbI4YMm7Ut3tVcGMmZaGl8BsiTYGl/tCElZtNEZ2yrgIB1FcS/+HnoInFbVW4qmy7nFzGE1z6ZPGV9XAAdWF32PypqPQrQCe9pp/EbRi87hRAVImYrLAxTPRg==',key_name='tempest-keypair-253539883',keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:35:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65e6bca909aa4dd3ab1eecef7ed2aa09',ramdisk_id='',reservation_id='r-3jipzoix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1140008513',owner_user_name='tempest-ServersTestJSON-1140008513-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:35:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1a7ed38b6d6a44dabe6c44e6375b7b29',uuid=7cfaa330-b089-4421-aad5-ee9cdec71c71,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.428 188781 DEBUG nova.network.os_vif_util [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Converting VIF {"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.429 188781 DEBUG nova.network.os_vif_util [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.429 188781 DEBUG os_vif [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.431 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.432 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c2bacc-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
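[Editor's note] The `DelPortCommand` transaction above is os-vif unplugging the tap interface from br-int through ovsdbapp. A standalone sketch of issuing the same command; the ovsdb-server socket path is an assumption:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local ovsdb-server (socket path assumed).
idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# if_exists=True mirrors the log: deleting an absent port is a no-op.
api.del_port('tapa8c2bacc-68', bridge='br-int',
             if_exists=True).execute(check_error=True)
```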
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.427 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[052b0c34-8bcc-4d5f-b954-20b3de5c626f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.433 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01786654-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.433 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.437 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 kernel: tap01786654-e0: left promiscuous mode
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.445 188781 INFO os_vif [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3a:32,bridge_name='br-int',has_traffic_filtering=True,id=a8c2bacc-6880-4dc4-a4de-24561426643c,network=Network(01786654-eac7-4d52-bce1-6a98f80c6941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2bacc-68')
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.446 188781 INFO nova.virt.libvirt.driver [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Deleting instance files /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71_del
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.446 188781 INFO nova.virt.libvirt.driver [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Deletion of /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71_del complete
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.453 188781 DEBUG oslo_concurrency.lockutils [None req-fc80caa4-fda4-4ab4-8623-27c26e53b57b f852f439f2394296a1bd7c9dfc0f03cc e5dd04de830547fc9be85d60a48c5a31 - - default default] Lock "7a56de80-4437-4013-96c2-be1937f088e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.457 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.460 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[38e2ce4d-0213-4068-bea5-0f13ed4deb1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.462 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.474 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[ab76d1a7-34e0-4f3f-94cc-507f2585505e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.475 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[78f9564f-e07a-4b16-9c46-58aaeeaba6d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.492 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[f13e0069-b64e-4a10-a860-df99abe0e880]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484666, 'reachable_time': 20057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252889, 'error': None, 'target': 'ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.495 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01786654-eac7-4d52-bce1-6a98f80c6941 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:35:55 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:55.496 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[76a0eb39-f4d3-4381-9acd-bfc570a5af62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:35:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d01786654\x2deac7\x2d4d52\x2dbce1\x2d6a98f80c6941.mount: Deactivated successfully.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.502 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Periodic task is updating the host stats, it is trying to get disk info for instance-00000006, but the backing disk storage was removed by a concurrent operation such as resize. Error: No disk at /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk: nova.exception.DiskNotFound: No disk at /var/lib/nova/instances/7cfaa330-b089-4421-aad5-ee9cdec71c71/disk
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.504 188781 INFO nova.compute.manager [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Took 0.47 seconds to destroy the instance on the hypervisor.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.505 188781 DEBUG oslo.service.loopingcall [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.505 188781 DEBUG nova.compute.manager [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.505 188781 DEBUG nova.network.neutron [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.513 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.566 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.567 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.615 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
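[Editor's note] The two CMD pairs above are nova's disk-usage audit probing the remaining instance's disk with qemu-img, wrapped by `oslo_concurrency.prlimit` to cap address space (`--as=1073741824`) and CPU time (`--cpu=30`). The same invocation expressed through the oslo.concurrency helper, with the path copied from the log:

```python
from oslo_concurrency import processutils

out, err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk',
    '--force-share', '--output=json',
    # Renders as: python3 -m oslo_concurrency.prlimit --as=... --cpu=30 --
    prlimit=processutils.ProcessLimits(address_space=1073741824,
                                       cpu_time=30))
```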
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.663 188781 DEBUG nova.compute.manager [req-37467026-18c6-4c2e-a56a-2f1dda6ee174 req-e22cd7c0-4fa4-48b1-97df-9f155feb1e29 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Received event network-vif-deleted-fa073f2b-de2e-4fae-9203-432a59201885 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.895 188781 DEBUG nova.compute.manager [req-e2ff3bf2-592c-4f17-b955-1d6007096afb req-f7c752a4-7921-4ad1-8c9c-4bb67da2d90f 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-vif-unplugged-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.895 188781 DEBUG oslo_concurrency.lockutils [req-e2ff3bf2-592c-4f17-b955-1d6007096afb req-f7c752a4-7921-4ad1-8c9c-4bb67da2d90f 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.896 188781 DEBUG oslo_concurrency.lockutils [req-e2ff3bf2-592c-4f17-b955-1d6007096afb req-f7c752a4-7921-4ad1-8c9c-4bb67da2d90f 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.896 188781 DEBUG oslo_concurrency.lockutils [req-e2ff3bf2-592c-4f17-b955-1d6007096afb req-f7c752a4-7921-4ad1-8c9c-4bb67da2d90f 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.896 188781 DEBUG nova.compute.manager [req-e2ff3bf2-592c-4f17-b955-1d6007096afb req-f7c752a4-7921-4ad1-8c9c-4bb67da2d90f 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] No waiting events found dispatching network-vif-unplugged-a8c2bacc-6880-4dc4-a4de-24561426643c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.896 188781 DEBUG nova.compute.manager [req-e2ff3bf2-592c-4f17-b955-1d6007096afb req-f7c752a4-7921-4ad1-8c9c-4bb67da2d90f 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-vif-unplugged-a8c2bacc-6880-4dc4-a4de-24561426643c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.907 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.909 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5179MB free_disk=72.20288848876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.909 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.909 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.991 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 7cfaa330-b089-4421-aad5-ee9cdec71c71 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.992 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance da31f324-38ad-4f77-b724-3ef1628be336 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.992 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:35:55 compute-0 nova_compute[188777]: 2026-02-19 20:35:55.992 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.045 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.051 188781 DEBUG nova.network.neutron [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Updated VIF entry in instance network info cache for port a8c2bacc-6880-4dc4-a4de-24561426643c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.052 188781 DEBUG nova.network.neutron [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Updating instance_info_cache with network_info: [{"id": "a8c2bacc-6880-4dc4-a4de-24561426643c", "address": "fa:16:3e:2e:3a:32", "network": {"id": "01786654-eac7-4d52-bce1-6a98f80c6941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1217512163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65e6bca909aa4dd3ab1eecef7ed2aa09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2bacc-68", "ovs_interfaceid": "a8c2bacc-6880-4dc4-a4de-24561426643c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.064 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.075 188781 DEBUG oslo_concurrency.lockutils [req-7480604c-f710-4979-871e-e323f896b23a req-89750447-ff14-4998-9fff-977c29e51183 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-7cfaa330-b089-4421-aad5-ee9cdec71c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.094 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:35:56 compute-0 nova_compute[188777]: 2026-02-19 20:35:56.094 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.089 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:57 compute-0 sshd-session[252867]: Received disconnect from 103.119.94.10 port 33368:11: Bye Bye [preauth]
Feb 19 20:35:57 compute-0 sshd-session[252867]: Disconnected from authenticating user root 103.119.94.10 port 33368 [preauth]
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.231 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.309 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.575 188781 DEBUG nova.network.neutron [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.604 188781 INFO nova.compute.manager [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Took 2.10 seconds to deallocate network for instance.
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.663 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.664 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.764 188781 DEBUG nova.compute.provider_tree [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.784 188781 DEBUG nova.scheduler.client.report [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.816 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.822 188781 DEBUG nova.compute.manager [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-changed-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.822 188781 DEBUG nova.compute.manager [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Refreshing instance network info cache due to event network-changed-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.823 188781 DEBUG oslo_concurrency.lockutils [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.823 188781 DEBUG oslo_concurrency.lockutils [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.823 188781 DEBUG nova.network.neutron [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Refreshing network info cache for port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.847 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.852 188781 INFO nova.scheduler.client.report [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Deleted allocations for instance 7cfaa330-b089-4421-aad5-ee9cdec71c71
Feb 19 20:35:57 compute-0 nova_compute[188777]: 2026-02-19 20:35:57.951 188781 DEBUG oslo_concurrency.lockutils [None req-33b1439c-5159-4293-adcb-dce08b87bcae 1a7ed38b6d6a44dabe6c44e6375b7b29 65e6bca909aa4dd3ab1eecef7ed2aa09 - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.066 188781 DEBUG nova.compute.manager [req-51304503-4a80-4c6c-8222-d8083ec028e8 req-1a27de8b-1c7f-4bcd-bb38-45a69af6506c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.067 188781 DEBUG oslo_concurrency.lockutils [req-51304503-4a80-4c6c-8222-d8083ec028e8 req-1a27de8b-1c7f-4bcd-bb38-45a69af6506c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.067 188781 DEBUG oslo_concurrency.lockutils [req-51304503-4a80-4c6c-8222-d8083ec028e8 req-1a27de8b-1c7f-4bcd-bb38-45a69af6506c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.067 188781 DEBUG oslo_concurrency.lockutils [req-51304503-4a80-4c6c-8222-d8083ec028e8 req-1a27de8b-1c7f-4bcd-bb38-45a69af6506c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "7cfaa330-b089-4421-aad5-ee9cdec71c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.068 188781 DEBUG nova.compute.manager [req-51304503-4a80-4c6c-8222-d8083ec028e8 req-1a27de8b-1c7f-4bcd-bb38-45a69af6506c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] No waiting events found dispatching network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.068 188781 WARNING nova.compute.manager [req-51304503-4a80-4c6c-8222-d8083ec028e8 req-1a27de8b-1c7f-4bcd-bb38-45a69af6506c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received unexpected event network-vif-plugged-a8c2bacc-6880-4dc4-a4de-24561426643c for instance with vm_state deleted and task_state None.
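The WARNING above is the tail end of Nova's external-event handshake: Neutron posts network-vif-plugged, the compute manager tries to pop a matching waiter, and finds none because the instance was already deleted. A hypothetical sketch of the pop-a-waiting-event idea (names are illustrative, not Nova internals):

    import threading

    waiting = {}  # (instance_uuid, event_name) -> threading.Event

    def wait_for(uuid, name, timeout=300):
        # Registered by code paths that expect an event (e.g. plugging a VIF).
        ev = waiting.setdefault((uuid, name), threading.Event())
        return ev.wait(timeout)

    def external_event(uuid, name):
        ev = waiting.pop((uuid, name), None)
        if ev is None:
            # No waiter registered: logged as "Received unexpected event".
            print(f'unexpected event {name} for {uuid}')
        else:
            ev.set()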
Feb 19 20:35:58 compute-0 podman[252897]: 2026-02-19 20:35:58.379872387 +0000 UTC m=+0.065354598 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 19 20:35:58 compute-0 podman[252896]: 2026-02-19 20:35:58.418737379 +0000 UTC m=+0.108730270 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, version=9.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9)
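These health_status lines are podman's periodic healthcheck results for the EDPM-managed containers, with the full container config echoed into the event. The same status can be read back on demand; a sketch assuming podman is on PATH and that the container defines a healthcheck (the Go-template path follows the inspect schema):

    import subprocess

    def health(name: str) -> str:
        # Assumption: .State.Health.Status is populated for containers
        # that define a healthcheck, as ceilometer_agent_ipmi does above.
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{.State.Health.Status}}', name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(health('ceilometer_agent_ipmi'))  # e.g. "healthy"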
Feb 19 20:35:58 compute-0 nova_compute[188777]: 2026-02-19 20:35:58.729 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:59 compute-0 podman[204724]: time="2026-02-19T20:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:35:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 19 20:35:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 19 20:35:59 compute-0 nova_compute[188777]: 2026-02-19 20:35:59.973 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:35:59 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:59.972 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:35:59 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:35:59.975 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.036 188781 DEBUG nova.network.neutron [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updated VIF entry in instance network info cache for port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.037 188781 DEBUG nova.network.neutron [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
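The network_info blob cached above is a list of VIF dicts; the fixed and floating addresses sit a few levels down. A small sketch that walks exactly the structure from the log line (trimmed to the fields needed here, values copied from the log):

    vif = {
        'id': 'b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2',
        'network': {'subnets': [{
            'cidr': '10.100.0.0/28',
            'ips': [{'address': '10.100.0.13', 'type': 'fixed',
                     'floating_ips': [{'address': '192.168.122.241',
                                       'type': 'floating'}]}],
        }]},
    }
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            print('fixed:', ip['address'])
            for fip in ip.get('floating_ips', []):
                print('floating:', fip['address'])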
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.066 188781 DEBUG oslo_concurrency.lockutils [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.066 188781 DEBUG nova.compute.manager [req-3726964c-1495-4e1b-8774-bfbbdf898bbe req-6dedaa63-8d4b-4c46-8e6e-557311ce211e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Received event network-vif-deleted-a8c2bacc-6880-4dc4-a4de-24561426643c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.067 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.067 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.068 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.435 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:00 compute-0 nova_compute[188777]: 2026-02-19 20:36:00.867 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:01 compute-0 openstack_network_exporter[207898]: ERROR   20:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:36:01 compute-0 openstack_network_exporter[207898]: ERROR   20:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
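Both appctl errors are expected on this host: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only apply to the userspace (netdev/DPDK) datapath, and this node runs the kernel datapath (note "datapath_type": "system" in the port details above), so the exporter's PMD probes fail. Reproducing the failing call directly (command string taken from the log):

    import subprocess

    res = subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                         capture_output=True, text=True)
    if res.returncode != 0:
        # On a kernel-datapath host this prints the same
        # "please specify an existing datapath" complaint as the exporter.
        print((res.stderr or res.stdout).strip())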
Feb 19 20:36:02 compute-0 podman[252934]: 2026-02-19 20:36:02.381134514 +0000 UTC m=+0.069311612 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.256 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.278 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.279 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.279 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:03 compute-0 nova_compute[188777]: 2026-02-19 20:36:03.731 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:04 compute-0 nova_compute[188777]: 2026-02-19 20:36:04.365 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:04 compute-0 podman[252958]: 2026-02-19 20:36:04.428852186 +0000 UTC m=+0.109113223 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:36:05 compute-0 ovn_controller[98843]: 2026-02-19T20:36:05Z|00094|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:36:05 compute-0 nova_compute[188777]: 2026-02-19 20:36:05.200 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:05 compute-0 nova_compute[188777]: 2026-02-19 20:36:05.440 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:06 compute-0 nova_compute[188777]: 2026-02-19 20:36:06.423 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:06 compute-0 nova_compute[188777]: 2026-02-19 20:36:06.894 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771533351.8938012, 7a56de80-4437-4013-96c2-be1937f088e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:06 compute-0 nova_compute[188777]: 2026-02-19 20:36:06.894 188781 INFO nova.compute.manager [-] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] VM Stopped (Lifecycle Event)
Feb 19 20:36:06 compute-0 nova_compute[188777]: 2026-02-19 20:36:06.915 188781 DEBUG nova.compute.manager [None req-de59c135-392b-4942-8296-98228efc7e1b - - - - - -] [instance: 7a56de80-4437-4013-96c2-be1937f088e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:06 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:06.979 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:07 compute-0 podman[252976]: 2026-02-19 20:36:07.399443405 +0000 UTC m=+0.083581076 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Feb 19 20:36:08 compute-0 nova_compute[188777]: 2026-02-19 20:36:08.734 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:09 compute-0 ovn_controller[98843]: 2026-02-19T20:36:09Z|00095|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:36:09 compute-0 nova_compute[188777]: 2026-02-19 20:36:09.497 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:09 compute-0 nova_compute[188777]: 2026-02-19 20:36:09.681 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:10 compute-0 nova_compute[188777]: 2026-02-19 20:36:10.311 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771533355.309739, 7cfaa330-b089-4421-aad5-ee9cdec71c71 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:10 compute-0 nova_compute[188777]: 2026-02-19 20:36:10.312 188781 INFO nova.compute.manager [-] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] VM Stopped (Lifecycle Event)
Feb 19 20:36:10 compute-0 nova_compute[188777]: 2026-02-19 20:36:10.344 188781 DEBUG nova.compute.manager [None req-ac514e0c-8220-40a5-a1b7-ddb3ae230e16 - - - - - -] [instance: 7cfaa330-b089-4421-aad5-ee9cdec71c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:10 compute-0 nova_compute[188777]: 2026-02-19 20:36:10.443 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:10 compute-0 nova_compute[188777]: 2026-02-19 20:36:10.819 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:13 compute-0 nova_compute[188777]: 2026-02-19 20:36:13.738 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:15 compute-0 nova_compute[188777]: 2026-02-19 20:36:15.446 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.589 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "997ebdcf-7eab-485b-8fbf-d21112c78946" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.591 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.621 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.635 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "3480b144-b674-41b9-bf18-e66e647fbe86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.636 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.663 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.738 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.739 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.753 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.754 188781 INFO nova.compute.claims [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.757 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.904 188781 DEBUG nova.compute.provider_tree [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.922 188781 DEBUG nova.scheduler.client.report [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.945 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.946 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.948 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.957 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:36:16 compute-0 nova_compute[188777]: 2026-02-19 20:36:16.958 188781 INFO nova.compute.claims [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.029 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.029 188781 DEBUG nova.network.neutron [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.045 188781 INFO nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.063 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.139 188781 DEBUG nova.compute.provider_tree [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.153 188781 DEBUG nova.scheduler.client.report [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.190 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.191 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.192 188781 INFO nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Creating image(s)
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.192 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.193 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.193 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.205 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.206 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.209 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.252 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.253 188781 DEBUG nova.network.neutron [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.258 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
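Note the wrapper around qemu-img in the two lines above: oslo_concurrency.prlimit caps the probe's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a malformed or hostile image cannot wedge the compute service, and --force-share allows probing a disk another process may hold open. The same guarded probe, replayed with the command copied from the log:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c'
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', base, '--force-share', '--output=json']
    # check=True raises if prlimit kills the probe or qemu-img fails.
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(info.get('format'), info.get('virtual-size'))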
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.258 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.259 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.269 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.282 188781 INFO nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.314 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.315 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.336 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.432 188781 DEBUG nova.policy [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90c9e30d17534357bece36d1acaab39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54ce0de2bf12421a9458013ccaa2dcad', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
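The failed policy check here is informational rather than an error: allocate_for_instance probes network:attach_external_network with the requester's credentials (roles reader/member, is_admin False), and the rule is admin-only by default, so external networks are simply excluded for this boot. A minimal oslo.policy sketch of the same authorize pattern (the enforcer wiring and the admin-only default are illustrative, not Nova's actual policy setup):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': '54ce0de2bf12421a9458013ccaa2dcad'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False for non-admin credentials, matching the DEBUG line above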
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.445 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.447 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.447 188781 INFO nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Creating image(s)
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.447 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "/var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.448 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "/var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.449 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "/var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.460 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.512 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.513 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.726 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk 1073741824" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.728 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.729 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.746 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.772 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.793 188781 DEBUG nova.policy [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cafdfae88326444da09076b7e3156d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df02c0da56494e34a2e958e403cdd24b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.813 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.815 188781 DEBUG nova.virt.disk.api [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Checking if we can resize image /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.816 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.853 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.855 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.875 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.877 188781 DEBUG nova.virt.disk.api [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Cannot resize image /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.878 188781 DEBUG nova.objects.instance [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lazy-loading 'migration_context' on Instance uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.905 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.906 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Ensure instance console log exists: /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.907 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.908 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.908 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.944 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk 1073741824" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.945 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.946 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.995 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.996 188781 DEBUG nova.virt.disk.api [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Checking if we can resize image /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:36:17 compute-0 nova_compute[188777]: 2026-02-19 20:36:17.997 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.044 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.045 188781 DEBUG nova.virt.disk.api [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Cannot resize image /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.046 188781 DEBUG nova.objects.instance [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lazy-loading 'migration_context' on Instance uuid 3480b144-b674-41b9-bf18-e66e647fbe86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.064 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.065 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Ensure instance console log exists: /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.066 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.066 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.067 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.190 188781 DEBUG nova.network.neutron [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Successfully created port: 44b4451c-db39-42a3-a2c6-5c8c42d1669b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:36:18 compute-0 nova_compute[188777]: 2026-02-19 20:36:18.741 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:19 compute-0 nova_compute[188777]: 2026-02-19 20:36:19.375 188781 DEBUG nova.network.neutron [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Successfully created port: 1989fec7-60a1-41e3-bd78-56a7bdfdad64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:36:19 compute-0 nova_compute[188777]: 2026-02-19 20:36:19.704 188781 DEBUG nova.network.neutron [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Successfully updated port: 44b4451c-db39-42a3-a2c6-5c8c42d1669b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:36:19 compute-0 nova_compute[188777]: 2026-02-19 20:36:19.742 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:19 compute-0 nova_compute[188777]: 2026-02-19 20:36:19.744 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:19 compute-0 nova_compute[188777]: 2026-02-19 20:36:19.744 188781 DEBUG nova.network.neutron [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.061 188781 DEBUG nova.network.neutron [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.203 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.333 188781 DEBUG nova.compute.manager [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Received event network-changed-44b4451c-db39-42a3-a2c6-5c8c42d1669b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.334 188781 DEBUG nova.compute.manager [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Refreshing instance network info cache due to event network-changed-44b4451c-db39-42a3-a2c6-5c8c42d1669b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.335 188781 DEBUG oslo_concurrency.lockutils [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.449 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.560 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "dff9d513-54f8-4d73-acf7-df610dc4d064" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.561 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.584 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.714 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.715 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.722 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.723 188781 INFO nova.compute.claims [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.935 188781 DEBUG nova.compute.provider_tree [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.954 188781 DEBUG nova.scheduler.client.report [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.988 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:20 compute-0 nova_compute[188777]: 2026-02-19 20:36:20.990 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.064 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.065 188781 DEBUG nova.network.neutron [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.086 188781 INFO nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.102 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.242 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.245 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.246 188781 INFO nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Creating image(s)
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.247 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.248 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.249 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.268 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.316 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.317 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.318 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.333 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.386 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.386 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:21 compute-0 podman[253036]: 2026-02-19 20:36:21.417479535 +0000 UTC m=+0.104080276 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.417 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c,backing_fmt=raw /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk 1073741824" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.418 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "a9fd80910f614000293e8e5ea927829d2f3ef59c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.419 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:21 compute-0 podman[253035]: 2026-02-19 20:36:21.43690148 +0000 UTC m=+0.120810777 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, release=1770267347, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.472 188781 DEBUG nova.policy [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.479 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.480 188781 DEBUG nova.virt.disk.api [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Checking if we can resize image /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.480 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.520 188781 DEBUG nova.network.neutron [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.551 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.551 188781 DEBUG nova.virt.disk.api [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Cannot resize image /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.552 188781 DEBUG nova.objects.instance [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lazy-loading 'migration_context' on Instance uuid dff9d513-54f8-4d73-acf7-df610dc4d064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.567 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.567 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Instance network_info: |[{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.569 188781 DEBUG oslo_concurrency.lockutils [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.569 188781 DEBUG nova.network.neutron [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Refreshing network info cache for port 44b4451c-db39-42a3-a2c6-5c8c42d1669b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.575 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Start _get_guest_xml network_info=[{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.576 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.578 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Ensure instance console log exists: /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.578 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.579 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.579 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.581 188781 DEBUG nova.network.neutron [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Successfully updated port: 1989fec7-60a1-41e3-bd78-56a7bdfdad64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.591 188781 WARNING nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.598 188781 DEBUG nova.virt.libvirt.host [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.599 188781 DEBUG nova.virt.libvirt.host [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.604 188781 DEBUG nova.virt.libvirt.host [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.605 188781 DEBUG nova.virt.libvirt.host [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.605 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.606 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.606 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.606 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.607 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.607 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.607 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.608 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.608 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.608 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.608 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.609 188781 DEBUG nova.virt.hardware [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.613 188781 DEBUG nova.virt.libvirt.vif [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-684728485',display_name='tempest-AttachInterfacesUnderV243Test-server-684728485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-684728485',id=9,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCORhgZy1XOw/yh7kQNfGgUFLvc0xo0yRXQlRy7heBBHDZRvZz6Q7+/+lXgISSEziX+sMQC6Xt7mCMIIM139I0Vx00lTjSt0I3YQYckemzRSilpagtWBv83ixwKtgoP7A==',key_name='tempest-keypair-148468656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54ce0de2bf12421a9458013ccaa2dcad',ramdisk_id='',reservation_id='r-t1rin612',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-291362993',owner_user_name='tempest-AttachInterfacesUnderV243Test-291362993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9e30d17534357bece36d1acaab39c',uuid=997ebdcf-7eab-485b-8fbf-d21112c78946,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.614 188781 DEBUG nova.network.os_vif_util [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Converting VIF {"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.615 188781 DEBUG nova.network.os_vif_util [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:60:ee,bridge_name='br-int',has_traffic_filtering=True,id=44b4451c-db39-42a3-a2c6-5c8c42d1669b,network=Network(ef3fe901-c03c-42fd-97b9-c1f0218f248b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44b4451c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.616 188781 DEBUG nova.objects.instance [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lazy-loading 'pci_devices' on Instance uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.644 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "refresh_cache-3480b144-b674-41b9-bf18-e66e647fbe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.644 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquired lock "refresh_cache-3480b144-b674-41b9-bf18-e66e647fbe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.644 188781 DEBUG nova.network.neutron [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.647 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <uuid>997ebdcf-7eab-485b-8fbf-d21112c78946</uuid>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <name>instance-00000009</name>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-684728485</nova:name>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:36:21</nova:creationTime>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:user uuid="90c9e30d17534357bece36d1acaab39c">tempest-AttachInterfacesUnderV243Test-291362993-project-member</nova:user>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:project uuid="54ce0de2bf12421a9458013ccaa2dcad">tempest-AttachInterfacesUnderV243Test-291362993</nova:project>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         <nova:port uuid="44b4451c-db39-42a3-a2c6-5c8c42d1669b">
Feb 19 20:36:21 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <system>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <entry name="serial">997ebdcf-7eab-485b-8fbf-d21112c78946</entry>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <entry name="uuid">997ebdcf-7eab-485b-8fbf-d21112c78946</entry>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </system>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <os>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </os>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <features>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </features>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.config"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:f7:60:ee"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <target dev="tap44b4451c-db"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/console.log" append="off"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <video>
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </video>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:36:21 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:36:21 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:36:21 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:36:21 compute-0 nova_compute[188777]: </domain>
Feb 19 20:36:21 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.648 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Preparing to wait for external event network-vif-plugged-44b4451c-db39-42a3-a2c6-5c8c42d1669b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.648 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Acquiring lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.648 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.649 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.650 188781 DEBUG nova.virt.libvirt.vif [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-684728485',display_name='tempest-AttachInterfacesUnderV243Test-server-684728485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-684728485',id=9,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCORhgZy1XOw/yh7kQNfGgUFLvc0xo0yRXQlRy7heBBHDZRvZz6Q7+/+lXgISSEziX+sMQC6Xt7mCMIIM139I0Vx00lTjSt0I3YQYckemzRSilpagtWBv83ixwKtgoP7A==',key_name='tempest-keypair-148468656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54ce0de2bf12421a9458013ccaa2dcad',ramdisk_id='',reservation_id='r-t1rin612',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-291362993',owner_user_name='tempest-AttachInterfacesUnderV243Test-291362993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9e30d17534357bece36d1acaab39c',uuid=997ebdcf-7eab-485b-8fbf-d21112c78946,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.650 188781 DEBUG nova.network.os_vif_util [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Converting VIF {"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.651 188781 DEBUG nova.network.os_vif_util [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:60:ee,bridge_name='br-int',has_traffic_filtering=True,id=44b4451c-db39-42a3-a2c6-5c8c42d1669b,network=Network(ef3fe901-c03c-42fd-97b9-c1f0218f248b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44b4451c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.651 188781 DEBUG os_vif [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:60:ee,bridge_name='br-int',has_traffic_filtering=True,id=44b4451c-db39-42a3-a2c6-5c8c42d1669b,network=Network(ef3fe901-c03c-42fd-97b9-c1f0218f248b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44b4451c-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.652 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.653 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.653 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.657 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.657 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44b4451c-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.658 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44b4451c-db, col_values=(('external_ids', {'iface-id': '44b4451c-db39-42a3-a2c6-5c8c42d1669b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:60:ee', 'vm-uuid': '997ebdcf-7eab-485b-8fbf-d21112c78946'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:21 compute-0 NetworkManager[57033]: <info>  [1771533381.6610] manager: (tap44b4451c-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.659 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.670 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.672 188781 INFO os_vif [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:60:ee,bridge_name='br-int',has_traffic_filtering=True,id=44b4451c-db39-42a3-a2c6-5c8c42d1669b,network=Network(ef3fe901-c03c-42fd-97b9-c1f0218f248b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44b4451c-db')
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.734 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.734 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.735 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] No VIF found with MAC fa:16:3e:f7:60:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.736 188781 INFO nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Using config drive
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.889 188781 DEBUG nova.compute.manager [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Received event network-changed-1989fec7-60a1-41e3-bd78-56a7bdfdad64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.889 188781 DEBUG nova.compute.manager [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Refreshing instance network info cache due to event network-changed-1989fec7-60a1-41e3-bd78-56a7bdfdad64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:36:21 compute-0 nova_compute[188777]: 2026-02-19 20:36:21.889 188781 DEBUG oslo_concurrency.lockutils [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-3480b144-b674-41b9-bf18-e66e647fbe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.168 188781 DEBUG nova.network.neutron [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.584 188781 INFO nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Creating config drive at /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.config
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.590 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpq_qcfper execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.719 188781 DEBUG oslo_concurrency.processutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpq_qcfper" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:22 compute-0 kernel: tap44b4451c-db: entered promiscuous mode
Feb 19 20:36:22 compute-0 NetworkManager[57033]: <info>  [1771533382.7772] manager: (tap44b4451c-db): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Feb 19 20:36:22 compute-0 ovn_controller[98843]: 2026-02-19T20:36:22Z|00096|binding|INFO|Claiming lport 44b4451c-db39-42a3-a2c6-5c8c42d1669b for this chassis.
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.778 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:22 compute-0 ovn_controller[98843]: 2026-02-19T20:36:22Z|00097|binding|INFO|44b4451c-db39-42a3-a2c6-5c8c42d1669b: Claiming fa:16:3e:f7:60:ee 10.100.0.3
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.785 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.786 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:22 compute-0 nova_compute[188777]: 2026-02-19 20:36:22.788 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:22 compute-0 ovn_controller[98843]: 2026-02-19T20:36:22Z|00098|binding|INFO|Setting lport 44b4451c-db39-42a3-a2c6-5c8c42d1669b ovn-installed in OVS
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.790 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:60:ee 10.100.0.3'], port_security=['fa:16:3e:f7:60:ee 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef3fe901-c03c-42fd-97b9-c1f0218f248b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54ce0de2bf12421a9458013ccaa2dcad', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c06f1cf7-9a65-46de-9e45-c457eaa74f4b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de21cb3b-b665-4b1e-a28c-20eb890485be, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=44b4451c-db39-42a3-a2c6-5c8c42d1669b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.791 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 44b4451c-db39-42a3-a2c6-5c8c42d1669b in datapath ef3fe901-c03c-42fd-97b9-c1f0218f248b bound to our chassis
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.792 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef3fe901-c03c-42fd-97b9-c1f0218f248b
Feb 19 20:36:22 compute-0 ovn_controller[98843]: 2026-02-19T20:36:22Z|00099|binding|INFO|Setting lport 44b4451c-db39-42a3-a2c6-5c8c42d1669b up in Southbound
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.803 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[0f25d9e9-efa3-4a67-8dc5-1bcc387129b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.804 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef3fe901-c1 in ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.805 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef3fe901-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.805 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[16a84eaf-3a6a-435f-9063-2945a79d0879]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.807 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f758ee-6932-4517-8a09-2fe97430a9d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.815 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a38a19-8940-40f6-8ede-af1a8ccabbed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 systemd-machined[158158]: New machine qemu-9-instance-00000009.
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.829 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a19c7d1d-885c-4183-bfc0-f0f1e7108b80]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 19 20:36:22 compute-0 systemd-udevd[253118]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.855 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[c572317b-7138-44fa-bcd8-3352e4ad5091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 NetworkManager[57033]: <info>  [1771533382.8610] device (tap44b4451c-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.863 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[d73cc0b3-037a-4586-be1a-adeedcc0c751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 NetworkManager[57033]: <info>  [1771533382.8671] manager: (tapef3fe901-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Feb 19 20:36:22 compute-0 NetworkManager[57033]: <info>  [1771533382.8694] device (tap44b4451c-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.889 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0b4f11-2ee3-4645-a717-df2dc044b998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.893 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1c5f40-bb68-4595-859f-881fe9d2a276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 NetworkManager[57033]: <info>  [1771533382.9141] device (tapef3fe901-c0): carrier: link connected
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.919 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd7fc6e-0aa3-4938-b70e-6a52bcc8f054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.932 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[628dbc5e-6de9-42c7-ab08-adefed6bc66c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef3fe901-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:41:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489437, 'reachable_time': 30952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253148, 'error': None, 'target': 'ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.945 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[80ad76c2-10f6-44f4-bdca-aa41828d1203]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:415f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489437, 'tstamp': 489437}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253149, 'error': None, 'target': 'ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.960 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[56d775ab-0a7b-48fd-a995-3885bd01145d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef3fe901-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:41:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489437, 'reachable_time': 30952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253151, 'error': None, 'target': 'ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
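The two privsep replies above are pyroute2 netlink dumps, an RTM_NEWADDR for the link-local address and an RTM_NEWLINK for the veth half tapef3fe901-c1, taken inside the ovnmeta-ef3fe901-... namespace; the agent routes them through the oslo.privsep daemon because they require elevated privileges and namespace entry. A minimal sketch of the same queries done directly with pyroute2, assuming the namespace exists and the caller is already privileged:

    # Sketch only: query links and addresses in the OVN metadata
    # namespace with pyroute2; the agent itself does this via
    # oslo.privsep, which is why the log shows privsep reply payloads.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b')
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
    finally:
        ns.close()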
Feb 19 20:36:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:22.985 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[09afe168-22fc-4e2a-bf28-3ae5f2c57412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.037 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[77b53a5a-ed61-418e-a8b1-c6ff0efa0480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.041 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef3fe901-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.041 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.042 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef3fe901-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:23 compute-0 kernel: tapef3fe901-c0: entered promiscuous mode
Feb 19 20:36:23 compute-0 NetworkManager[57033]: <info>  [1771533383.0470] manager: (tapef3fe901-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.050 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.056 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef3fe901-c0, col_values=(('external_ids', {'iface-id': '55b38ec7-c28d-4985-87ac-ac8d24f4e97c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
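The three single-command ovsdb transactions above (DelPortCommand, then AddPortCommand, then DbSetCommand) rewire the metadata tap: drop tapef3fe901-c0 from br-ex if it is there, attach it to br-int, and stamp the Interface row with the logical port UUID so ovn-controller can rebind it. A rough ovsdbapp equivalent, batched into one transaction for brevity; the socket path and timeout are assumptions, not taken from this log:

    # Sketch of the same three commands via ovsdbapp's Open_vSwitch API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapef3fe901-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapef3fe901-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapef3fe901-c0',
            ('external_ids',
             {'iface-id': '55b38ec7-c28d-4985-87ac-ac8d24f4e97c'})))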
Feb 19 20:36:23 compute-0 ovn_controller[98843]: 2026-02-19T20:36:23Z|00100|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.059 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.060 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef3fe901-c03c-42fd-97b9-c1f0218f248b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef3fe901-c03c-42fd-97b9-c1f0218f248b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
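The "Unable to access ... .pid.haproxy" DEBUG line is the expected first-run path: get_value_from_file treats a missing pidfile as "no proxy running for this network yet", so the agent proceeds to write a config and spawn one. A simplified probe in the same spirit (a sketch, not neutron's actual code):

    # Treat a missing pidfile as "not running" instead of an error.
    import errno

    def read_pid(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError as e:
            if e.errno == errno.ENOENT:
                return None  # no pidfile yet: proxy was never started
            raise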
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.061 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[6224de14-f414-4b33-a818-0aac28dda139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.062 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-ef3fe901-c03c-42fd-97b9-c1f0218f248b
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/ef3fe901-c03c-42fd-97b9-c1f0218f248b.pid.haproxy
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID ef3fe901-c03c-42fd-97b9-c1f0218f248b
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:36:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:23.062 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b', 'env', 'PROCESS_TAG=haproxy-ef3fe901-c03c-42fd-97b9-c1f0218f248b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef3fe901-c03c-42fd-97b9-c1f0218f248b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
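The rendered haproxy config binds the link-local metadata address 169.254.169.254:80 inside the namespace, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, and adds an X-OVN-Network-ID header so the metadata agent can map each request back to its network; the rootwrap command then launches haproxy inside the ovnmeta namespace. Once it is up, the proxy can be poked from the host roughly as below, a sketch assuming root privileges and that curl is installed; the EC2-style path is an illustrative guess at what a guest would request:

    # Probe the in-namespace metadata proxy from the host.
    import subprocess

    ns = 'ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b'
    out = subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-sS', 'http://169.254.169.254/latest/meta-data/'],
        capture_output=True, text=True, check=False)
    print(out.stdout or out.stderr)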
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.257 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533383.2572603, 997ebdcf-7eab-485b-8fbf-d21112c78946 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.257 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] VM Started (Lifecycle Event)
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.289 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.295 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533383.2573478, 997ebdcf-7eab-485b-8fbf-d21112c78946 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.296 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] VM Paused (Lifecycle Event)
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.341 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.346 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.366 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] During sync_power_state the instance has a pending task (spawning). Skip.
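The "Paused" lifecycle event races the ongoing spawn, so the power-state sync declines to reconcile: the database still says power_state 0 (NOSTATE) while libvirt reports 3 (PAUSED), but the pending task_state of 'spawning' short-circuits the sync. A toy version of that guard; nova's real _sync_instance_power_state handles many more transitions:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Inst:
        task_state: Optional[str]
        power_state: int

    def sync_power_state(inst, vm_power_state):
        # Events observed during an in-flight task are skipped, as in
        # the log line "the instance has a pending task (spawning)".
        if inst.task_state is not None:
            return
        inst.power_state = vm_power_state

    i = Inst(task_state='spawning', power_state=0)  # 0 = NOSTATE
    sync_power_state(i, 3)                          # 3 = PAUSED
    assert i.power_state == 0  # left alone until the spawn finishes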
Feb 19 20:36:23 compute-0 podman[253197]: 2026-02-19 20:36:23.446604457 +0000 UTC m=+0.052746215 container create 96214c45f14b16eb4bbfb9dde3005c6b96ebeadeb9a39d506a9c22e50aa4e149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 19 20:36:23 compute-0 systemd[1]: Started libpod-conmon-96214c45f14b16eb4bbfb9dde3005c6b96ebeadeb9a39d506a9c22e50aa4e149.scope.
Feb 19 20:36:23 compute-0 podman[253197]: 2026-02-19 20:36:23.421341919 +0000 UTC m=+0.027483567 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:36:23 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29230002d6b7cc237a54e44499ef324b6292f75dc49f894f0d54c4138a9f917a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:36:23 compute-0 podman[253197]: 2026-02-19 20:36:23.542685631 +0000 UTC m=+0.148827269 container init 96214c45f14b16eb4bbfb9dde3005c6b96ebeadeb9a39d506a9c22e50aa4e149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 19 20:36:23 compute-0 podman[253197]: 2026-02-19 20:36:23.548647377 +0000 UTC m=+0.154788995 container start 96214c45f14b16eb4bbfb9dde3005c6b96ebeadeb9a39d506a9c22e50aa4e149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 19 20:36:23 compute-0 neutron-haproxy-ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b[253211]: [NOTICE]   (253215) : New worker (253217) forked
Feb 19 20:36:23 compute-0 neutron-haproxy-ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b[253211]: [NOTICE]   (253215) : Loading success.
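Note the split of responsibilities here: the agent's haproxy wrapper starts the proxy as a podman container (neutron-haproxy-ovnmeta-...) from the metadata-agent image, and the two NOTICE lines are haproxy's master process reporting a forked worker and a clean config load. The sidecar's state can be checked afterwards with podman inspect; the container name below is taken from the log, and the Go template is standard podman syntax:

    # Check the per-network haproxy sidecar's state.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-ef3fe901-c03c-42fd-97b9-c1f0218f248b'
    state = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        capture_output=True, text=True, check=True).stdout.strip()
    print(state)  # expected: running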
Feb 19 20:36:23 compute-0 nova_compute[188777]: 2026-02-19 20:36:23.744 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:24 compute-0 ovn_controller[98843]: 2026-02-19T20:36:24Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:08:9f 10.100.0.13
Feb 19 20:36:24 compute-0 ovn_controller[98843]: 2026-02-19T20:36:24Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:08:9f 10.100.0.13
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.283 188781 DEBUG nova.network.neutron [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Updating instance_info_cache with network_info: [{"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.318 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Releasing lock "refresh_cache-3480b144-b674-41b9-bf18-e66e647fbe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.319 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Instance network_info: |[{"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.321 188781 DEBUG oslo_concurrency.lockutils [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-3480b144-b674-41b9-bf18-e66e647fbe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.323 188781 DEBUG nova.network.neutron [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Refreshing network info cache for port 1989fec7-60a1-41e3-bd78-56a7bdfdad64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.329 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Start _get_guest_xml network_info=[{"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.339 188781 WARNING nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.345 188781 DEBUG nova.virt.libvirt.host [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.346 188781 DEBUG nova.virt.libvirt.host [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.353 188781 DEBUG nova.virt.libvirt.host [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.353 188781 DEBUG nova.virt.libvirt.host [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.354 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.354 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.354 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.355 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.355 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.355 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.356 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.356 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.357 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.358 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.358 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.358 188781 DEBUG nova.virt.hardware [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
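The run of nova.virt.hardware lines above is nova deriving the guest CPU topology: the flavor and image set no limits or preferences (all 0:0:0), the limits default to 65536 per dimension, and for a single vCPU the only factorization is sockets=1, cores=1, threads=1. A toy enumeration in the same spirit; nova.virt.hardware additionally orders candidates by preference, this only lists valid factorizations:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is
        # exactly the vCPU count and fits within the per-axis limits.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log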
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.363 188781 DEBUG nova.virt.libvirt.vif [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1666570736',display_name='tempest-ServerAddressesTestJSON-server-1666570736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1666570736',id=10,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df02c0da56494e34a2e958e403cdd24b',ramdisk_id='',reservation_id='r-3ze12ez9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1073436659',owner_user_name='tempest-ServerAddressesTestJSON-1073436659-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:17Z,user_data=None,user_id='cafdfae88326444da09076b7e3156d58',uuid=3480b144-b674-41b9-bf18-e66e647fbe86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.363 188781 DEBUG nova.network.os_vif_util [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Converting VIF {"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.364 188781 DEBUG nova.network.os_vif_util [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.365 188781 DEBUG nova.objects.instance [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lazy-loading 'pci_devices' on Instance uuid 3480b144-b674-41b9-bf18-e66e647fbe86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.380 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <uuid>3480b144-b674-41b9-bf18-e66e647fbe86</uuid>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <name>instance-0000000a</name>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:name>tempest-ServerAddressesTestJSON-server-1666570736</nova:name>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:36:24</nova:creationTime>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:user uuid="cafdfae88326444da09076b7e3156d58">tempest-ServerAddressesTestJSON-1073436659-project-member</nova:user>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:project uuid="df02c0da56494e34a2e958e403cdd24b">tempest-ServerAddressesTestJSON-1073436659</nova:project>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         <nova:port uuid="1989fec7-60a1-41e3-bd78-56a7bdfdad64">
Feb 19 20:36:24 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <system>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <entry name="serial">3480b144-b674-41b9-bf18-e66e647fbe86</entry>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <entry name="uuid">3480b144-b674-41b9-bf18-e66e647fbe86</entry>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </system>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <os>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </os>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <features>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </features>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.config"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:f9:a3:57"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <target dev="tap1989fec7-60"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/console.log" append="off"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <video>
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </video>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:36:24 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:36:24 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:36:24 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:36:24 compute-0 nova_compute[188777]: </domain>
Feb 19 20:36:24 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
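The XML above is the complete domain definition nova hands to libvirt for instance-0000000a: q35 machine type, host-model CPU, a virtio root disk plus a SATA config-drive CD-ROM, an ethernet interface targeting tap1989fec7-60, and a bank of pcie-root-ports for hotplug. Defining and booting such a document by hand looks like this sketch (libvirt-python against qemu:///system; the file path is hypothetical and the disks referenced by the XML must already exist):

    # Define and start a domain from an XML document.
    import libvirt

    xml = open('/tmp/instance-0000000a.xml').read()  # copy of the XML above
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # power it on
        print(dom.name(), dom.ID())
    finally:
        conn.close()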
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.381 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Preparing to wait for external event network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.382 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.382 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.383 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
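Before actually plugging the VIF, nova registers a waiter for the external event network-vif-plugged-1989fec7-... (the lock acquire/release pair above is the event registry being updated); the spawn later blocks on that event until neutron reports the port active or a timeout fires. The shape of the pattern, reduced to the standard library; nova uses its own event objects on eventlet, and the 300s figure is nova's documented vif_plugging_timeout default, assumed here:

    import threading

    events = {}

    def prepare_for_instance_event(instance_uuid, name):
        # Register the waiter *before* triggering the action that
        # eventually produces the event, so it cannot be missed.
        return events.setdefault((instance_uuid, name), threading.Event())

    ev = prepare_for_instance_event(
        '3480b144-b674-41b9-bf18-e66e647fbe86',
        'network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64')
    # ... plug the VIF, define and launch the domain ...
    if not ev.wait(timeout=300):
        raise TimeoutError('network-vif-plugged never arrived')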
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.383 188781 DEBUG nova.virt.libvirt.vif [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1666570736',display_name='tempest-ServerAddressesTestJSON-server-1666570736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1666570736',id=10,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df02c0da56494e34a2e958e403cdd24b',ramdisk_id='',reservation_id='r-3ze12ez9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1073436659',owner_user_name='tempest-ServerAddressesTestJSON-1073436659-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:17Z,user_data=None,user_id='cafdfae88326444da09076b7e3156d58',uuid=3480b144-b674-41b9-bf18-e66e647fbe86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.384 188781 DEBUG nova.network.os_vif_util [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Converting VIF {"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.385 188781 DEBUG nova.network.os_vif_util [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.385 188781 DEBUG os_vif [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.386 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.387 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.387 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.391 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.391 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1989fec7-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.392 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1989fec7-60, col_values=(('external_ids', {'iface-id': '1989fec7-60a1-41e3-bd78-56a7bdfdad64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:a3:57', 'vm-uuid': '3480b144-b674-41b9-bf18-e66e647fbe86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.394 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:24 compute-0 NetworkManager[57033]: <info>  [1771533384.3954] manager: (tap1989fec7-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.395 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.400 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.401 188781 INFO os_vif [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60')
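os-vif realizes the plug as the two commands just logged: AddPortCommand on br-int plus a DbSetCommand writing iface-id, iface-status, attached-mac and vm-uuid into the Interface's external_ids, which is exactly the marker ovn-controller watches to claim the logical port. The write can be read back with db_get; this sketch rebuilds the same ovsdbapp handle as the earlier one, with the socket path and timeout again assumed:

    # Verify the external_ids written during the VIF plug.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    ids = api.db_get('Interface', 'tap1989fec7-60',
                     'external_ids').execute(check_error=True)
    assert ids['iface-id'] == '1989fec7-60a1-41e3-bd78-56a7bdfdad64'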
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.463 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.463 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.468 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] No VIF found with MAC fa:16:3e:f9:a3:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.468 188781 INFO nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Using config drive
Feb 19 20:36:24 compute-0 podman[253229]: 2026-02-19 20:36:24.500809038 +0000 UTC m=+0.070676675 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
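
This health_status record is podman's periodic healthcheck on the ovn_metadata_agent container; per the embedded config_data, the test is the mounted /openstack/healthcheck script. The same check can be triggered by hand; a sketch, with the container name taken from the log:

    # Sketch: run the container's configured healthcheck once, outside podman's timer.
    import subprocess

    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                       capture_output=True, text=True)
    # Exit code 0 means the /openstack/healthcheck test passed.
    print('healthy' if r.returncode == 0 else 'unhealthy: %s%s' % (r.stdout, r.stderr))
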
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.810 188781 DEBUG nova.network.neutron [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Successfully created port: 913d86d2-685f-4393-9143-efa6e9c6941a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.971 188781 DEBUG nova.network.neutron [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updated VIF entry in instance network info cache for port 44b4451c-db39-42a3-a2c6-5c8c42d1669b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.972 188781 DEBUG nova.network.neutron [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
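
The network_info blob cached above is plain JSON once the log framing is stripped, so pulling fixed IPs out of it is mechanical. A self-contained helper, with a sample trimmed to just the fields it touches (values copied from the log line above):

    # Trimmed sample of the cached network_info structure logged above.
    network_info = [{
        "devname": "tap44b4451c-db",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.3", "type": "fixed"}],
        }]},
    }]

    def fixed_ips(network_info):
        """Yield (devname, address) pairs from a Nova network_info cache entry."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["devname"], ip["address"]

    print(list(fixed_ips(network_info)))  # [('tap44b4451c-db', '10.100.0.3')]
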
Feb 19 20:36:24 compute-0 nova_compute[188777]: 2026-02-19 20:36:24.993 188781 DEBUG oslo_concurrency.lockutils [req-abe89c16-4b0d-4290-8ac1-72dd1d691fcb req-0b9ee1e8-d5f5-477f-9a5b-6d0abfa83122 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.181 188781 INFO nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Creating config drive at /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.config
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.186 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmzki4250 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.306 188781 DEBUG oslo_concurrency.processutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmzki4250" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
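
The config drive is just an ISO 9660 image built from a staging directory: Nova shells out to mkisofs through oslo.concurrency exactly as the two lines above show, and cloud-init later finds the volume by its "config-2" label. A sketch reproducing the call, with hypothetical output and staging paths:

    # Sketch: build a config-drive ISO the way the logged command does.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/tmp/disk.config',            # hypothetical output path
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2',
        '-quiet', '-J', '-r',
        '-V', 'config-2',                    # volume label cloud-init looks for
        '/tmp/config_drive_staging')         # hypothetical dir with the openstack/ metadata tree
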
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.309 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 kernel: tap1989fec7-60: entered promiscuous mode
Feb 19 20:36:25 compute-0 NetworkManager[57033]: <info>  [1771533385.3515] manager: (tap1989fec7-60): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Feb 19 20:36:25 compute-0 ovn_controller[98843]: 2026-02-19T20:36:25Z|00101|binding|INFO|Claiming lport 1989fec7-60a1-41e3-bd78-56a7bdfdad64 for this chassis.
Feb 19 20:36:25 compute-0 ovn_controller[98843]: 2026-02-19T20:36:25Z|00102|binding|INFO|1989fec7-60a1-41e3-bd78-56a7bdfdad64: Claiming fa:16:3e:f9:a3:57 10.100.0.6
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.354 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 systemd-udevd[253139]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.359 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 ovn_controller[98843]: 2026-02-19T20:36:25Z|00103|binding|INFO|Setting lport 1989fec7-60a1-41e3-bd78-56a7bdfdad64 ovn-installed in OVS
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.365 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a3:57 10.100.0.6'], port_security=['fa:16:3e:f9:a3:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3480b144-b674-41b9-bf18-e66e647fbe86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df02c0da56494e34a2e958e403cdd24b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d2eccda-2a88-439e-94ad-5b6196dcbf09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20ea14a6-132c-4469-8f24-d77696552e1a, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=1989fec7-60a1-41e3-bd78-56a7bdfdad64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
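
The metadata agent reacts to southbound Port_Binding changes through ovsdbapp's row-event machinery; the long match dump above is one such event firing when the port gains a chassis. A stripped-down event class in the same style (the match logic here is simplified relative to neutron's real agent):

    # Sketch: a Port_Binding update watcher in the style of the event logged above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Mirrors the logged signature: events=('update',), table='Port_Binding'.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port was just bound (old.chassis was empty).
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('port %s bound, datapath %s' % (row.logical_port, row.datapath.uuid))
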
Feb 19 20:36:25 compute-0 ovn_controller[98843]: 2026-02-19T20:36:25Z|00104|binding|INFO|Setting lport 1989fec7-60a1-41e3-bd78-56a7bdfdad64 up in Southbound
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.367 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 1989fec7-60a1-41e3-bd78-56a7bdfdad64 in datapath 580376f8-802f-4dbc-bd1f-d9322eb2ee71 bound to our chassis
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.368 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 NetworkManager[57033]: <info>  [1771533385.3715] device (tap1989fec7-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.369 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580376f8-802f-4dbc-bd1f-d9322eb2ee71
Feb 19 20:36:25 compute-0 NetworkManager[57033]: <info>  [1771533385.3747] device (tap1989fec7-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.381 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[13e0aa2d-2c9e-466b-89d1-d03c1c62b65b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.382 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap580376f8-81 in ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.385 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap580376f8-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.385 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[bb635335-a590-496b-9f15-aa88b82c0fe8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
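
Provisioning the datapath means building an ovnmeta-<network> namespace with a VETH pair: tap580376f8-80 stays in the root namespace (and is plugged into br-int a few lines below), while tap580376f8-81 moves inside. Roughly, via neutron's privsep-backed ip_lib; a sketch that elides the retry and error handling the agent actually has:

    # Sketch: the namespace + veth plumbing the agent performs here.
    from neutron.agent.linux import ip_lib

    ns = 'ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71'
    ip = ip_lib.IPWrapper()
    ip.ensure_namespace(ns)                            # create the netns if missing
    root_dev, ns_dev = ip.add_veth('tap580376f8-80',   # end left in the root namespace
                                   'tap580376f8-81',   # end moved into the namespace
                                   namespace2=ns)
    root_dev.link.set_up()
    ns_dev.link.set_up()
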
Feb 19 20:36:25 compute-0 systemd-machined[158158]: New machine qemu-10-instance-0000000a.
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.386 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[ab01c93a-6954-41c6-ae3b-ea984edf7b10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.401 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[e6638711-1329-46ba-8cac-939298d29bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.428 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c7aba7a8-e1bd-4c97-8211-a458256a03df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.447 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[1758a985-a3e2-4164-8b3e-b13fe731cebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.458 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[89a05964-2ae5-44c7-b6d5-debe3acbe986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 NetworkManager[57033]: <info>  [1771533385.4598] manager: (tap580376f8-80): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.480 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[52b47164-62bb-4079-b316-c6b6eef6e0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.484 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[d55eff14-f4f2-4178-8aed-6351b6e914ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 NetworkManager[57033]: <info>  [1771533385.5003] device (tap580376f8-80): carrier: link connected
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.503 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[7962280a-b94a-4a6a-8d1c-e15ca4e495e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.516 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[da999378-7b17-4ff1-9c41-9ec61f0cf2bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580376f8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:e1:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489696, 'reachable_time': 29478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253284, 'error': None, 'target': 'ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.525 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae7b136-14a2-461b-a86d-e5ffe3b55e59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:e193'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489696, 'tstamp': 489696}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253285, 'error': None, 'target': 'ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.540 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[80244687-6d30-4796-ae7d-59c8826b5842]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580376f8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:e1:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489696, 'reachable_time': 29478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253286, 'error': None, 'target': 'ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.565 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[fb232a37-6b22-4db2-983e-ebaab293696b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.609 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[d55532f7-6778-444f-bc4f-bcbee16d02a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.611 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580376f8-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.612 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.614 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580376f8-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:25 compute-0 kernel: tap580376f8-80: entered promiscuous mode
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.616 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 NetworkManager[57033]: <info>  [1771533385.6189] manager: (tap580376f8-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.623 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580376f8-80, col_values=(('external_ids', {'iface-id': 'ea075556-7da2-4125-89a9-8bfd052c684e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:25 compute-0 ovn_controller[98843]: 2026-02-19T20:36:25Z|00105|binding|INFO|Releasing lport ea075556-7da2-4125-89a9-8bfd052c684e from this chassis (sb_readonly=0)
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.625 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.628 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/580376f8-802f-4dbc-bd1f-d9322eb2ee71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/580376f8-802f-4dbc-bd1f-d9322eb2ee71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.629 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.629 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[282bf6cf-7296-4dc3-8a6c-7c9ea79bf630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.631 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-580376f8-802f-4dbc-bd1f-d9322eb2ee71
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/580376f8-802f-4dbc-bd1f-d9322eb2ee71.pid.haproxy
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID 580376f8-802f-4dbc-bd1f-d9322eb2ee71
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:36:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:25.632 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'env', 'PROCESS_TAG=haproxy-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/580376f8-802f-4dbc-bd1f-d9322eb2ee71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
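
With the config written, the agent launches haproxy inside the namespace through rootwrap, tagging the process so the kill scripts mounted into the container can find it later. The logged command, reduced to a plain subprocess call (a sketch; the real code path goes through neutron's external process manager):

    # Sketch: the spawn logged above as a direct subprocess call.
    import subprocess

    network_id = '580376f8-802f-4dbc-bd1f-d9322eb2ee71'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
           'env', 'PROCESS_TAG=haproxy-%s' % network_id,
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id]
    subprocess.Popen(cmd)  # haproxy backgrounds itself: 'daemon' is set in the config above
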
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.669 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533385.6692748, 3480b144-b674-41b9-bf18-e66e647fbe86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.670 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] VM Started (Lifecycle Event)
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.691 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.696 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533385.6693678, 3480b144-b674-41b9-bf18-e66e647fbe86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.697 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] VM Paused (Lifecycle Event)
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.720 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.725 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:25 compute-0 nova_compute[188777]: 2026-02-19 20:36:25.749 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] During sync_power_state the instance has a pending task (spawning). Skip.
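
The Paused event mid-spawn is expected: libvirt starts the guest paused, so the sync sees DB power_state 0 (NOSTATE) against VM power_state 3 (PAUSED) while task_state is still 'spawning', and skips rather than fight the in-flight build. A condensed sketch of that decision; the constants mirror nova.compute.power_state:

    # Condensed sketch of the skip logged above (not Nova's full handler).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:                # e.g. 'spawning': hands off, skip
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'resync: db=%s vm=%s' % (db_power_state, vm_power_state)
        return 'in sync'

    print(sync_power_state('spawning', NOSTATE, PAUSED))  # skip: pending task spawning
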
Feb 19 20:36:26 compute-0 podman[253325]: 2026-02-19 20:36:25.981862585 +0000 UTC m=+0.037029485 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:36:26 compute-0 podman[253325]: 2026-02-19 20:36:26.237898007 +0000 UTC m=+0.293064867 container create 901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 19 20:36:26 compute-0 systemd[1]: Started libpod-conmon-901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe.scope.
Feb 19 20:36:26 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec304604edd55715d8e9383b695c6959dbeecaeea07923a8ed5cc4a27553d7d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:36:26 compute-0 podman[253325]: 2026-02-19 20:36:26.477110303 +0000 UTC m=+0.532277153 container init 901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:36:26 compute-0 podman[253325]: 2026-02-19 20:36:26.485869246 +0000 UTC m=+0.541036106 container start 901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:36:26 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [NOTICE]   (253344) : New worker (253346) forked
Feb 19 20:36:26 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [NOTICE]   (253344) : Loading success.
Feb 19 20:36:26 compute-0 nova_compute[188777]: 2026-02-19 20:36:26.575 188781 DEBUG nova.network.neutron [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Successfully updated port: 913d86d2-685f-4393-9143-efa6e9c6941a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:36:26 compute-0 nova_compute[188777]: 2026-02-19 20:36:26.595 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:26 compute-0 nova_compute[188777]: 2026-02-19 20:36:26.595 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:26 compute-0 nova_compute[188777]: 2026-02-19 20:36:26.608 188781 DEBUG nova.network.neutron [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:36:26 compute-0 nova_compute[188777]: 2026-02-19 20:36:26.983 188781 DEBUG nova.network.neutron [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:36:27 compute-0 nova_compute[188777]: 2026-02-19 20:36:27.059 188781 DEBUG nova.compute.manager [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Received event network-changed-913d86d2-685f-4393-9143-efa6e9c6941a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:27 compute-0 nova_compute[188777]: 2026-02-19 20:36:27.060 188781 DEBUG nova.compute.manager [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Refreshing instance network info cache due to event network-changed-913d86d2-685f-4393-9143-efa6e9c6941a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:36:27 compute-0 nova_compute[188777]: 2026-02-19 20:36:27.061 188781 DEBUG oslo_concurrency.lockutils [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:27 compute-0 nova_compute[188777]: 2026-02-19 20:36:27.349 188781 DEBUG nova.network.neutron [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Updated VIF entry in instance network info cache for port 1989fec7-60a1-41e3-bd78-56a7bdfdad64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:36:27 compute-0 nova_compute[188777]: 2026-02-19 20:36:27.349 188781 DEBUG nova.network.neutron [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Updating instance_info_cache with network_info: [{"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:27 compute-0 nova_compute[188777]: 2026-02-19 20:36:27.362 188781 DEBUG oslo_concurrency.lockutils [req-426d341e-5e91-408e-8ee4-14333a5c88bd req-f26c4e13-0f97-4d0c-846a-89c8e80988e5 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-3480b144-b674-41b9-bf18-e66e647fbe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:28 compute-0 nova_compute[188777]: 2026-02-19 20:36:28.747 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.306 188781 DEBUG nova.network.neutron [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.331 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.332 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Instance network_info: |[{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.332 188781 DEBUG oslo_concurrency.lockutils [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.333 188781 DEBUG nova.network.neutron [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Refreshing network info cache for port 913d86d2-685f-4393-9143-efa6e9c6941a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.338 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Start _get_guest_xml network_info=[{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
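
The disk_info mapping above becomes two libvirt devices: the root qcow2 on virtio as vda (boot_index 1), and the config drive on a SATA cdrom as sda. An illustration of the XML this roughly maps to; illustrative only, not Nova's exact output, with source paths following the instance-directory convention seen earlier in the log:

    # Illustrative <disk> elements implied by the disk_info mapping above.
    DISKS_XML = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.config'/>
      <target dev='sda' bus='sata'/>
      <readonly/>
    </disk>
    """
    print(DISKS_XML)
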
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.349 188781 WARNING nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.365 188781 DEBUG nova.virt.libvirt.host [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.366 188781 DEBUG nova.virt.libvirt.host [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.373 188781 DEBUG nova.virt.libvirt.host [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.373 188781 DEBUG nova.virt.libvirt.host [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.374 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.374 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:34:24Z,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='59f01dee51a74ac1a9f82733f591827d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:34:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.375 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.376 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.376 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.377 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.377 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.378 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.378 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.379 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.379 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.380 188781 DEBUG nova.virt.hardware [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
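[Annotation] The ten DEBUG lines above show Nova searching for a guest CPU topology: with no flavor or image constraints (limits and preferences all 0:0:0), the limits default to 65536 per dimension, and for the 1-vCPU m1.nano flavor only 1:1:1 is possible. A minimal sketch of that enumeration, assuming a simplified divisor walk rather than Nova's actual hardware.py code:

```python
# Sketch only: enumerate sockets/cores/threads splits whose product
# equals the requested vCPU count, bounded by per-dimension maxima.
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtCPUTopology:
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield each topology with sockets * cores * threads == vcpus."""
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                yield VirtCPUTopology(sockets, cores, threads)

# For 1 vCPU this yields exactly one candidate, matching the log's
# "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
print(list(possible_topologies(1)))
```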
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.386 188781 DEBUG nova.virt.libvirt.vif [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-215985627',display_name='tempest-TestNetworkBasicOps-server-215985627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-215985627',id=11,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIsYJD9ox4HWzETHNgZ/F46rGv2rzfGJKbeGBRYy1qHGeMQ+xiWXMf/Ju2RdchMTiWO5H1IvvY6dNMlS05h/zOwSw9Z98RXNdErkyi59XIAxQQcVrSMHH701SXsB/uaKmw==',key_name='tempest-TestNetworkBasicOps-1830742569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eb9e3732b9f4456d9f90bf3e156f6f7c',ramdisk_id='',reservation_id='r-np5bnsze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1215127919',owner_user_name='tempest-TestNetworkBasicOps-1215127919-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:21Z,user_data=None,user_id='ef20d0162e404953a8f45beac9fadf18',uuid=dff9d513-54f8-4d73-acf7-df610dc4d064,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.386 188781 DEBUG nova.network.os_vif_util [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Converting VIF {"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.387 188781 DEBUG nova.network.os_vif_util [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a8:ee,bridge_name='br-int',has_traffic_filtering=True,id=913d86d2-685f-4393-9143-efa6e9c6941a,network=Network(2194f0b2-0b56-4fa1-a2f7-0ec7651876c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap913d86d2-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
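[Annotation] The "Converting VIF" / "Converted object" pair above is the translation from Neutron's untyped VIF dict to a typed os-vif object. A hedged sketch of that mapping, with field names taken from the log; the dataclass here is illustrative, not the real os_vif model:

```python
# Sketch of the nova_to_osvif_vif translation for the ovs plug path.
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitch:
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool

def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
    if vif["type"] != "ovs":
        raise ValueError("only the ovs path is sketched here")
    details = vif.get("details", {})
    return VIFOpenVSwitch(
        id=vif["id"],                               # Neutron port UUID
        address=vif["address"],                     # fa:16:3e:c2:a8:ee
        bridge_name=details.get("bridge_name", "br-int"),
        vif_name=vif["devname"],                    # tap913d86d2-68
        has_traffic_filtering=details.get("port_filter", False),
        active=vif.get("active", False),
    )
```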
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.389 188781 DEBUG nova.objects.instance [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lazy-loading 'pci_devices' on Instance uuid dff9d513-54f8-4d73-acf7-df610dc4d064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.394 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.406 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <uuid>dff9d513-54f8-4d73-acf7-df610dc4d064</uuid>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <name>instance-0000000b</name>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:name>tempest-TestNetworkBasicOps-server-215985627</nova:name>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:36:29</nova:creationTime>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:user uuid="ef20d0162e404953a8f45beac9fadf18">tempest-TestNetworkBasicOps-1215127919-project-member</nova:user>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:project uuid="eb9e3732b9f4456d9f90bf3e156f6f7c">tempest-TestNetworkBasicOps-1215127919</nova:project>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         <nova:port uuid="913d86d2-685f-4393-9143-efa6e9c6941a">
Feb 19 20:36:29 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <system>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <entry name="serial">dff9d513-54f8-4d73-acf7-df610dc4d064</entry>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <entry name="uuid">dff9d513-54f8-4d73-acf7-df610dc4d064</entry>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </system>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <os>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </os>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <features>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </features>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.config"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:c2:a8:ee"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <target dev="tap913d86d2-68"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/console.log" append="off"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <video>
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </video>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:36:29 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:36:29 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:36:29 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:36:29 compute-0 nova_compute[188777]: </domain>
Feb 19 20:36:29 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
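[Annotation] The XML block above is the domain definition Nova hands to libvirt. A quick stdlib check of the same structure (values copied from the log, document trimmed to a few elements) shows where the flavor and image settings ended up:

```python
import xml.etree.ElementTree as ET

domain = ET.fromstring("""
<domain type="kvm">
  <uuid>dff9d513-54f8-4d73-acf7-df610dc4d064</uuid>
  <name>instance-0000000b</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <os><type arch="x86_64" machine="q35">hvm</type></os>
</domain>
""")

print(domain.findtext("name"))                 # instance-0000000b
print(int(domain.findtext("memory")) // 1024)  # 128 MiB, from m1.nano
print(domain.find("os/type").get("machine"))   # q35, from image_hw_machine_type
```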
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.407 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Preparing to wait for external event network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.407 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Acquiring lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.408 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.408 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
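[Annotation] The three lines above are the prepare-then-wait pattern: the manager registers an event under the per-instance "<uuid>-events" lock *before* plugging the VIF, so Neutron's network-vif-plugged notification cannot race past the waiter. A minimal sketch of the idea (names illustrative, not Nova's actual InstanceEvents code):

```python
import threading

_events = {}
_events_lock = threading.Lock()   # plays the "<uuid>-events" lock role

def prepare_for_instance_event(instance_uuid, event_name):
    """Register (or reuse) an event to wait on, before acting."""
    with _events_lock:
        return _events.setdefault((instance_uuid, event_name),
                                  threading.Event())

def deliver_instance_event(instance_uuid, event_name):
    """Called when the external notification arrives."""
    with _events_lock:
        event = _events.pop((instance_uuid, event_name), None)
    if event:
        event.set()   # wakes whoever blocks in event.wait(timeout)

waiter = prepare_for_instance_event(
    "dff9d513-54f8-4d73-acf7-df610dc4d064",
    "network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a")
```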
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.409 188781 DEBUG nova.virt.libvirt.vif [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-215985627',display_name='tempest-TestNetworkBasicOps-server-215985627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-215985627',id=11,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIsYJD9ox4HWzETHNgZ/F46rGv2rzfGJKbeGBRYy1qHGeMQ+xiWXMf/Ju2RdchMTiWO5H1IvvY6dNMlS05h/zOwSw9Z98RXNdErkyi59XIAxQQcVrSMHH701SXsB/uaKmw==',key_name='tempest-TestNetworkBasicOps-1830742569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eb9e3732b9f4456d9f90bf3e156f6f7c',ramdisk_id='',reservation_id='r-np5bnsze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1215127919',owner_user_name='tempest-TestNetworkBasicOps-1215127919-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:21Z,user_data=None,user_id='ef20d0162e404953a8f45beac9fadf18',uuid=dff9d513-54f8-4d73-acf7-df610dc4d064,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.409 188781 DEBUG nova.network.os_vif_util [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Converting VIF {"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.410 188781 DEBUG nova.network.os_vif_util [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a8:ee,bridge_name='br-int',has_traffic_filtering=True,id=913d86d2-685f-4393-9143-efa6e9c6941a,network=Network(2194f0b2-0b56-4fa1-a2f7-0ec7651876c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap913d86d2-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.410 188781 DEBUG os_vif [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a8:ee,bridge_name='br-int',has_traffic_filtering=True,id=913d86d2-685f-4393-9143-efa6e9c6941a,network=Network(2194f0b2-0b56-4fa1-a2f7-0ec7651876c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap913d86d2-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.411 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.411 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.411 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.415 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.416 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap913d86d2-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.416 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap913d86d2-68, col_values=(('external_ids', {'iface-id': '913d86d2-685f-4393-9143-efa6e9c6941a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:a8:ee', 'vm-uuid': 'dff9d513-54f8-4d73-acf7-df610dc4d064'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.418 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:29 compute-0 NetworkManager[57033]: <info>  [1771533389.4197] manager: (tap913d86d2-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.421 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.424 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.425 188781 INFO os_vif [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a8:ee,bridge_name='br-int',has_traffic_filtering=True,id=913d86d2-685f-4393-9143-efa6e9c6941a,network=Network(2194f0b2-0b56-4fa1-a2f7-0ec7651876c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap913d86d2-68')
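[Annotation] The AddPortCommand/DbSetCommand transaction logged above is equivalent to a single ovs-vsctl call. A sketch for reproducing it by hand (os-vif actually speaks the OVSDB protocol directly via ovsdbapp rather than shelling out):

```python
import subprocess

port = "tap913d86d2-68"
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
     "--", "set", "Interface", port,
     # external_ids below are what lets ovn-controller match this OVS
     # interface to the Neutron port (the "Claiming lport" lines later).
     "external_ids:iface-id=913d86d2-685f-4393-9143-efa6e9c6941a",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:c2:a8:ee",
     "external_ids:vm-uuid=dff9d513-54f8-4d73-acf7-df610dc4d064"],
    check=True)
```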
Feb 19 20:36:29 compute-0 podman[253355]: 2026-02-19 20:36:29.431796358 +0000 UTC m=+0.111897569 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, version=9.4, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 19 20:36:29 compute-0 podman[253356]: 2026-02-19 20:36:29.463719433 +0000 UTC m=+0.137055004 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.482 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.482 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.482 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] No VIF found with MAC fa:16:3e:c2:a8:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:36:29 compute-0 nova_compute[188777]: 2026-02-19 20:36:29.483 188781 INFO nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Using config drive
Feb 19 20:36:29 compute-0 podman[204724]: time="2026-02-19T20:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:36:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:36:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5299 "" "Go-http-client/1.1"
Feb 19 20:36:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:30.453 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:30.454 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:30.455 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:30 compute-0 nova_compute[188777]: 2026-02-19 20:36:30.977 188781 INFO nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Creating config drive at /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.config
Feb 19 20:36:30 compute-0 nova_compute[188777]: 2026-02-19 20:36:30.983 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyr16we4v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.104 188781 DEBUG oslo_concurrency.processutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyr16we4v" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
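[Annotation] The config drive is a plain ISO9660 filesystem labelled "config-2". The sketch below reproduces the mkisofs invocation logged above on a hypothetical minimal metadata tree; Nova populates far more than one file under openstack/ in the temp directory it builds:

```python
import json
import pathlib
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Standard config-drive layout: openstack/latest/meta_data.json
    latest = pathlib.Path(tmp, "openstack", "latest")
    latest.mkdir(parents=True)
    (latest / "meta_data.json").write_text(
        json.dumps({"uuid": "dff9d513-54f8-4d73-acf7-df610dc4d064"}))
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-J", "-r", "-V", "config-2", tmp],
        check=True)
```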
Feb 19 20:36:31 compute-0 kernel: tap913d86d2-68: entered promiscuous mode
Feb 19 20:36:31 compute-0 NetworkManager[57033]: <info>  [1771533391.1674] manager: (tap913d86d2-68): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.168 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 ovn_controller[98843]: 2026-02-19T20:36:31Z|00106|binding|INFO|Claiming lport 913d86d2-685f-4393-9143-efa6e9c6941a for this chassis.
Feb 19 20:36:31 compute-0 ovn_controller[98843]: 2026-02-19T20:36:31Z|00107|binding|INFO|913d86d2-685f-4393-9143-efa6e9c6941a: Claiming fa:16:3e:c2:a8:ee 10.100.0.4
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.173 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.185 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.194 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:a8:ee 10.100.0.4'], port_security=['fa:16:3e:c2:a8:ee 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bacb7301-4bd1-4671-ad4e-1b734366df1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=711aa7eb-e667-4403-bf74-bf12c5083da8, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=913d86d2-685f-4393-9143-efa6e9c6941a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:36:31 compute-0 ovn_controller[98843]: 2026-02-19T20:36:31Z|00108|binding|INFO|Setting lport 913d86d2-685f-4393-9143-efa6e9c6941a ovn-installed in OVS
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.195 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 913d86d2-685f-4393-9143-efa6e9c6941a in datapath 2194f0b2-0b56-4fa1-a2f7-0ec7651876c4 bound to our chassis
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.198 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2194f0b2-0b56-4fa1-a2f7-0ec7651876c4
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.197 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 ovn_controller[98843]: 2026-02-19T20:36:31Z|00109|binding|INFO|Setting lport 913d86d2-685f-4393-9143-efa6e9c6941a up in Southbound
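[Annotation] After ovn-controller claims the lport and marks it up in the Southbound DB (the three binding|INFO lines above), the binding can be verified by hand. A sketch using the standard ovn-sbctl database commands (expected output format is an assumption):

```python
import subprocess

out = subprocess.run(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     "logical_port=913d86d2-685f-4393-9143-efa6e9c6941a"],
    capture_output=True, text=True, check=True).stdout
print(out)  # expect the compute-0 chassis row UUID and up : [true]
```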
Feb 19 20:36:31 compute-0 systemd-machined[158158]: New machine qemu-11-instance-0000000b.
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.207 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[06c36fba-8dac-4087-adad-7b4a935fe250]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.208 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2194f0b2-01 in ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
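[Annotation] The "Creating VETH tap2194f0b2-01 in ovnmeta-..." step boils down to a veth pair with one end moved into the metadata namespace; the other end (tap2194f0b2-00) is plugged into br-int a few lines later. Sketched here with iproute2 commands run as root; the agent does this through pyroute2 behind oslo.privsep rather than by shelling out:

```python
import subprocess

ns = "ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4"
subprocess.run(["ip", "netns", "add", ns], check=True)
subprocess.run(["ip", "link", "add", "tap2194f0b2-00",
                "type", "veth", "peer", "name", "tap2194f0b2-01"],
               check=True)
subprocess.run(["ip", "link", "set", "tap2194f0b2-01", "netns", ns],
               check=True)
subprocess.run(["ip", "-n", ns, "link", "set", "tap2194f0b2-01", "up"],
               check=True)
```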
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.210 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2194f0b2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.210 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[9c193b2e-7112-4cd0-9d37-e3e107ab50e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.211 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4488a2bd-91f1-4888-a901-dd8ce300d024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.223 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[96e3d6cb-31f7-4a34-a289-9d644c1000d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 systemd-udevd[253416]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.244 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3b8435-6474-437f-b6a8-f570cb6f1d67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 NetworkManager[57033]: <info>  [1771533391.2576] device (tap913d86d2-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:36:31 compute-0 NetworkManager[57033]: <info>  [1771533391.2583] device (tap913d86d2-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.276 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed04c5f-a381-4d9d-a10b-26e2ce4fb4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.285 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a77b4197-c554-4634-90f5-3a21414d7c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 NetworkManager[57033]: <info>  [1771533391.2865] manager: (tap2194f0b2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.322 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4b27f9-267f-40d7-bf3b-440f811dacba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.326 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[a93ed983-5325-4159-ab8f-cc75f1da805a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 NetworkManager[57033]: <info>  [1771533391.3519] device (tap2194f0b2-00): carrier: link connected
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.361 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[34fc98e0-a527-4222-965d-7e1d1a4e7eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.377 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[abf3bf4d-4be8-49fd-b924-81788ee2a57c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2194f0b2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:34:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490281, 'reachable_time': 26430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253446, 'error': None, 'target': 'ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.392 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef73318-8ecd-44ca-b73b-ab46f7f5ba11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:3425'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490281, 'tstamp': 490281}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253447, 'error': None, 'target': 'ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.404 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c83773-e80f-4f46-9d23-bcfb64f3d71c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2194f0b2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:34:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490281, 'reachable_time': 26430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253448, 'error': None, 'target': 'ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 openstack_network_exporter[207898]: ERROR   20:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:36:31 compute-0 openstack_network_exporter[207898]: ERROR   20:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.455 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b48931d8-c98e-4e77-8dfc-b49a536eee1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.496 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[9633a836-ab08-452e-9d81-46937694b09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.498 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2194f0b2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.498 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.501 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2194f0b2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:31 compute-0 kernel: tap2194f0b2-00: entered promiscuous mode
Feb 19 20:36:31 compute-0 NetworkManager[57033]: <info>  [1771533391.5039] manager: (tap2194f0b2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.506 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.507 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2194f0b2-00, col_values=(('external_ids', {'iface-id': 'a514a3b0-3622-43cb-93f5-1ce2f2eacb84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:31 compute-0 ovn_controller[98843]: 2026-02-19T20:36:31Z|00110|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.509 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.510 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2194f0b2-0b56-4fa1-a2f7-0ec7651876c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2194f0b2-0b56-4fa1-a2f7-0ec7651876c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.510 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.511 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e9238acd-8117-405a-af70-9494b37318b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.512 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/2194f0b2-0b56-4fa1-a2f7-0ec7651876c4.pid.haproxy
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID 2194f0b2-0b56-4fa1-a2f7-0ec7651876c4
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:36:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:31.513 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4', 'env', 'PROCESS_TAG=haproxy-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2194f0b2-0b56-4fa1-a2f7-0ec7651876c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.514 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.555 188781 DEBUG nova.compute.manager [req-b5422faf-d134-414e-b21a-aa345fca65db req-c8ba1526-a0b4-4923-a3a1-28d40c2b3758 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Received event network-vif-plugged-44b4451c-db39-42a3-a2c6-5c8c42d1669b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.556 188781 DEBUG oslo_concurrency.lockutils [req-b5422faf-d134-414e-b21a-aa345fca65db req-c8ba1526-a0b4-4923-a3a1-28d40c2b3758 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.556 188781 DEBUG oslo_concurrency.lockutils [req-b5422faf-d134-414e-b21a-aa345fca65db req-c8ba1526-a0b4-4923-a3a1-28d40c2b3758 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.557 188781 DEBUG oslo_concurrency.lockutils [req-b5422faf-d134-414e-b21a-aa345fca65db req-c8ba1526-a0b4-4923-a3a1-28d40c2b3758 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.557 188781 DEBUG nova.compute.manager [req-b5422faf-d134-414e-b21a-aa345fca65db req-c8ba1526-a0b4-4923-a3a1-28d40c2b3758 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Processing event network-vif-plugged-44b4451c-db39-42a3-a2c6-5c8c42d1669b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.561 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533391.555848, dff9d513-54f8-4d73-acf7-df610dc4d064 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.561 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] VM Started (Lifecycle Event)
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.563 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.569 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.576 188781 INFO nova.virt.libvirt.driver [-] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Instance spawned successfully.
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.577 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.607 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.614 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.615 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.615 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.616 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.617 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.617 188781 DEBUG nova.virt.libvirt.driver [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.623 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533391.5559924, dff9d513-54f8-4d73-acf7-df610dc4d064 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.623 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] VM Paused (Lifecycle Event)
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.658 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.663 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.685 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.685 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533391.5682325, 997ebdcf-7eab-485b-8fbf-d21112c78946 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.686 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] VM Resumed (Lifecycle Event)
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.697 188781 INFO nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Took 14.51 seconds to spawn the instance on the hypervisor.
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.697 188781 DEBUG nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.720 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.725 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.750 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.766 188781 INFO nova.compute.manager [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Took 15.07 seconds to build instance.
Feb 19 20:36:31 compute-0 nova_compute[188777]: 2026-02-19 20:36:31.789 188781 DEBUG oslo_concurrency.lockutils [None req-be589926-3d32-4786-9594-66b31792c364 90c9e30d17534357bece36d1acaab39c 54ce0de2bf12421a9458013ccaa2dcad - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:31 compute-0 podman[253485]: 2026-02-19 20:36:31.868862677 +0000 UTC m=+0.046427158 container create c1963fb2b2f278b038c49cce0c8efab48457592008ca67a16b9357968c38d5af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 19 20:36:31 compute-0 systemd[1]: Started libpod-conmon-c1963fb2b2f278b038c49cce0c8efab48457592008ca67a16b9357968c38d5af.scope.
Feb 19 20:36:31 compute-0 podman[253485]: 2026-02-19 20:36:31.846946544 +0000 UTC m=+0.024511045 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:36:31 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e23c319ddacd0619762f5d3bd2abf8a5b4c16c531696e5fd5bda4de95a8a6dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:36:31 compute-0 podman[253485]: 2026-02-19 20:36:31.97609468 +0000 UTC m=+0.153659161 container init c1963fb2b2f278b038c49cce0c8efab48457592008ca67a16b9357968c38d5af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 19 20:36:31 compute-0 podman[253485]: 2026-02-19 20:36:31.990147878 +0000 UTC m=+0.167712369 container start c1963fb2b2f278b038c49cce0c8efab48457592008ca67a16b9357968c38d5af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:36:32 compute-0 neutron-haproxy-ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4[253501]: [NOTICE]   (253505) : New worker (253507) forked
Feb 19 20:36:32 compute-0 neutron-haproxy-ovnmeta-2194f0b2-0b56-4fa1-a2f7-0ec7651876c4[253501]: [NOTICE]   (253505) : Loading success.
Feb 19 20:36:32 compute-0 nova_compute[188777]: 2026-02-19 20:36:32.478 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:32 compute-0 nova_compute[188777]: 2026-02-19 20:36:32.784 188781 DEBUG nova.network.neutron [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated VIF entry in instance network info cache for port 913d86d2-685f-4393-9143-efa6e9c6941a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:36:32 compute-0 nova_compute[188777]: 2026-02-19 20:36:32.785 188781 DEBUG nova.network.neutron [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:32 compute-0 nova_compute[188777]: 2026-02-19 20:36:32.807 188781 DEBUG oslo_concurrency.lockutils [req-01e1f999-7952-4c29-89a7-609bcae592c9 req-b1ffcee4-2593-469f-9195-b71f3f8256e0 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:33 compute-0 podman[253516]: 2026-02-19 20:36:33.406653973 +0000 UTC m=+0.088086076 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.750 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.756 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Received event network-vif-plugged-44b4451c-db39-42a3-a2c6-5c8c42d1669b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.756 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.756 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.757 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.757 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] No waiting events found dispatching network-vif-plugged-44b4451c-db39-42a3-a2c6-5c8c42d1669b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.757 188781 WARNING nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Received unexpected event network-vif-plugged-44b4451c-db39-42a3-a2c6-5c8c42d1669b for instance with vm_state active and task_state None.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.757 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Received event network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.757 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.757 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.758 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.758 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Processing event network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.758 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Received event network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.758 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.758 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.758 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.759 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] No waiting events found dispatching network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.759 188781 WARNING nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Received unexpected event network-vif-plugged-1989fec7-60a1-41e3-bd78-56a7bdfdad64 for instance with vm_state building and task_state spawning.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.759 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Received event network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.759 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.759 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.759 188781 DEBUG oslo_concurrency.lockutils [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.760 188781 DEBUG nova.compute.manager [req-8adaa593-f9d4-44bd-be66-9cc66f97138e req-9b3e4a19-72ba-4615-b363-c4af174d13e7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Processing event network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.760 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.760 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.765 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.766 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.766 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533393.7666512, 3480b144-b674-41b9-bf18-e66e647fbe86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.767 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] VM Resumed (Lifecycle Event)
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.771 188781 INFO nova.virt.libvirt.driver [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Instance spawned successfully.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.771 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.774 188781 INFO nova.virt.libvirt.driver [-] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Instance spawned successfully.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.774 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.794 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.801 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.802 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.802 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.803 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.803 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.803 188781 DEBUG nova.virt.libvirt.driver [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.809 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.811 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.812 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.812 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.812 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.813 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.813 188781 DEBUG nova.virt.libvirt.driver [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.842 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.842 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533393.7692513, dff9d513-54f8-4d73-acf7-df610dc4d064 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.842 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] VM Resumed (Lifecycle Event)
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.874 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.880 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.884 188781 INFO nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Took 16.44 seconds to spawn the instance on the hypervisor.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.884 188781 DEBUG nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.893 188781 INFO nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Took 12.65 seconds to spawn the instance on the hypervisor.
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.893 188781 DEBUG nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:33 compute-0 nova_compute[188777]: 2026-02-19 20:36:33.922 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:36:34 compute-0 nova_compute[188777]: 2026-02-19 20:36:34.089 188781 INFO nova.compute.manager [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Took 17.36 seconds to build instance.
Feb 19 20:36:34 compute-0 nova_compute[188777]: 2026-02-19 20:36:34.093 188781 INFO nova.compute.manager [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Took 13.41 seconds to build instance.
Feb 19 20:36:34 compute-0 nova_compute[188777]: 2026-02-19 20:36:34.114 188781 DEBUG oslo_concurrency.lockutils [None req-b5242f90-c0ee-46e2-8fbc-cd93fb5d4f50 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:34 compute-0 nova_compute[188777]: 2026-02-19 20:36:34.117 188781 DEBUG oslo_concurrency.lockutils [None req-b1277fb4-6659-42cb-8d70-0efc0e14ecc5 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:34 compute-0 nova_compute[188777]: 2026-02-19 20:36:34.419 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:35 compute-0 podman[253539]: 2026-02-19 20:36:35.389296896 +0000 UTC m=+0.082480112 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.026 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.093 188781 DEBUG nova.compute.manager [req-97605981-3bb4-4213-9ba3-c3dbf7a3a6db req-92e0eb7e-31ae-4d9f-8e8b-b51436d096be 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Received event network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.093 188781 DEBUG oslo_concurrency.lockutils [req-97605981-3bb4-4213-9ba3-c3dbf7a3a6db req-92e0eb7e-31ae-4d9f-8e8b-b51436d096be 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.093 188781 DEBUG oslo_concurrency.lockutils [req-97605981-3bb4-4213-9ba3-c3dbf7a3a6db req-92e0eb7e-31ae-4d9f-8e8b-b51436d096be 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.093 188781 DEBUG oslo_concurrency.lockutils [req-97605981-3bb4-4213-9ba3-c3dbf7a3a6db req-92e0eb7e-31ae-4d9f-8e8b-b51436d096be 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.094 188781 DEBUG nova.compute.manager [req-97605981-3bb4-4213-9ba3-c3dbf7a3a6db req-92e0eb7e-31ae-4d9f-8e8b-b51436d096be 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] No waiting events found dispatching network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:36:36 compute-0 nova_compute[188777]: 2026-02-19 20:36:36.094 188781 WARNING nova.compute.manager [req-97605981-3bb4-4213-9ba3-c3dbf7a3a6db req-92e0eb7e-31ae-4d9f-8e8b-b51436d096be 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Received unexpected event network-vif-plugged-913d86d2-685f-4393-9143-efa6e9c6941a for instance with vm_state active and task_state None.
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.109 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "3480b144-b674-41b9-bf18-e66e647fbe86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.110 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.110 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.110 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.110 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.111 188781 INFO nova.compute.manager [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Terminating instance
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.112 188781 DEBUG nova.compute.manager [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:36:38 compute-0 kernel: tap1989fec7-60 (unregistering): left promiscuous mode
Feb 19 20:36:38 compute-0 NetworkManager[57033]: <info>  [1771533398.1545] device (tap1989fec7-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.165 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 ovn_controller[98843]: 2026-02-19T20:36:38Z|00111|binding|INFO|Releasing lport 1989fec7-60a1-41e3-bd78-56a7bdfdad64 from this chassis (sb_readonly=0)
Feb 19 20:36:38 compute-0 ovn_controller[98843]: 2026-02-19T20:36:38Z|00112|binding|INFO|Setting lport 1989fec7-60a1-41e3-bd78-56a7bdfdad64 down in Southbound
Feb 19 20:36:38 compute-0 ovn_controller[98843]: 2026-02-19T20:36:38Z|00113|binding|INFO|Removing iface tap1989fec7-60 ovn-installed in OVS
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.179 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.183 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.206 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a3:57 10.100.0.6'], port_security=['fa:16:3e:f9:a3:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3480b144-b674-41b9-bf18-e66e647fbe86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df02c0da56494e34a2e958e403cdd24b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d2eccda-2a88-439e-94ad-5b6196dcbf09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20ea14a6-132c-4469-8f24-d77696552e1a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=1989fec7-60a1-41e3-bd78-56a7bdfdad64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.208 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 1989fec7-60a1-41e3-bd78-56a7bdfdad64 in datapath 580376f8-802f-4dbc-bd1f-d9322eb2ee71 unbound from our chassis
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.210 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 580376f8-802f-4dbc-bd1f-d9322eb2ee71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:36:38 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.211 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[efc48df1-ef0f-40b1-97ba-8201879d90d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.212 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71 namespace which is not needed anymore
Feb 19 20:36:38 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 4.747s CPU time.
Feb 19 20:36:38 compute-0 systemd-machined[158158]: Machine qemu-10-instance-0000000a terminated.
Feb 19 20:36:38 compute-0 podman[253559]: 2026-02-19 20:36:38.278850589 +0000 UTC m=+0.100587406 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.339 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.345 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.366 188781 INFO nova.virt.libvirt.driver [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Instance destroyed successfully.
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.366 188781 DEBUG nova.objects.instance [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lazy-loading 'resources' on Instance uuid 3480b144-b674-41b9-bf18-e66e647fbe86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.404 188781 DEBUG nova.virt.libvirt.vif [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1666570736',display_name='tempest-ServerAddressesTestJSON-server-1666570736',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1666570736',id=10,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df02c0da56494e34a2e958e403cdd24b',ramdisk_id='',reservation_id='r-3ze12ez9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1073436659',owner_user_name='tempest-ServerAddressesTestJSON-1073436659-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:36:33Z,user_data=None,user_id='cafdfae88326444da09076b7e3156d58',uuid=3480b144-b674-41b9-bf18-e66e647fbe86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.404 188781 DEBUG nova.network.os_vif_util [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Converting VIF {"id": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "address": "fa:16:3e:f9:a3:57", "network": {"id": "580376f8-802f-4dbc-bd1f-d9322eb2ee71", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533329539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df02c0da56494e34a2e958e403cdd24b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1989fec7-60", "ovs_interfaceid": "1989fec7-60a1-41e3-bd78-56a7bdfdad64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.405 188781 DEBUG nova.network.os_vif_util [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.405 188781 DEBUG os_vif [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.406 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.407 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1989fec7-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.408 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.410 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.412 188781 INFO os_vif [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a3:57,bridge_name='br-int',has_traffic_filtering=True,id=1989fec7-60a1-41e3-bd78-56a7bdfdad64,network=Network(580376f8-802f-4dbc-bd1f-d9322eb2ee71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1989fec7-60')
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.412 188781 INFO nova.virt.libvirt.driver [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Deleting instance files /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86_del
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.413 188781 INFO nova.virt.libvirt.driver [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Deletion of /var/lib/nova/instances/3480b144-b674-41b9-bf18-e66e647fbe86_del complete
Feb 19 20:36:38 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [NOTICE]   (253344) : haproxy version is 2.8.14-c23fe91
Feb 19 20:36:38 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [NOTICE]   (253344) : path to executable is /usr/sbin/haproxy
Feb 19 20:36:38 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [WARNING]  (253344) : Exiting Master process...
Feb 19 20:36:38 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [ALERT]    (253344) : Current worker (253346) exited with code 143 (Terminated)
Feb 19 20:36:38 compute-0 neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71[253339]: [WARNING]  (253344) : All workers exited. Exiting... (0)
Feb 19 20:36:38 compute-0 systemd[1]: libpod-901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe.scope: Deactivated successfully.
Feb 19 20:36:38 compute-0 conmon[253339]: conmon 901a03cde0b9e1594c41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe.scope/container/memory.events
Feb 19 20:36:38 compute-0 podman[253605]: 2026-02-19 20:36:38.501934103 +0000 UTC m=+0.180346063 container died 901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.504 188781 INFO nova.compute.manager [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Took 0.39 seconds to destroy the instance on the hypervisor.
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.505 188781 DEBUG oslo.service.loopingcall [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.505 188781 DEBUG nova.compute.manager [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.505 188781 DEBUG nova.network.neutron [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.571 188781 DEBUG nova.compute.manager [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Received event network-changed-44b4451c-db39-42a3-a2c6-5c8c42d1669b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.571 188781 DEBUG nova.compute.manager [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Refreshing instance network info cache due to event network-changed-44b4451c-db39-42a3-a2c6-5c8c42d1669b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.571 188781 DEBUG oslo_concurrency.lockutils [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.571 188781 DEBUG oslo_concurrency.lockutils [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.571 188781 DEBUG nova.network.neutron [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Refreshing network info cache for port 44b4451c-db39-42a3-a2c6-5c8c42d1669b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec304604edd55715d8e9383b695c6959dbeecaeea07923a8ed5cc4a27553d7d8-merged.mount: Deactivated successfully.
Feb 19 20:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe-userdata-shm.mount: Deactivated successfully.
Feb 19 20:36:38 compute-0 podman[253605]: 2026-02-19 20:36:38.592743484 +0000 UTC m=+0.271155424 container cleanup 901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 19 20:36:38 compute-0 systemd[1]: libpod-conmon-901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe.scope: Deactivated successfully.
Feb 19 20:36:38 compute-0 podman[253642]: 2026-02-19 20:36:38.698612814 +0000 UTC m=+0.086441625 container remove 901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.704 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7cdece-039a-4680-9e88-aa23c26a4e4c]: (4, ('Thu Feb 19 08:36:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71 (901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe)\n901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe\nThu Feb 19 08:36:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71 (901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe)\n901a03cde0b9e1594c4150690c4ff7641d5c1197a12f8e018a730e37cfca33fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.706 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[080a6467-62cf-4caa-99d7-4dead7ff1e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.707 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580376f8-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:38 compute-0 kernel: tap580376f8-80: left promiscuous mode
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.710 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.717 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.721 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd41687-c582-4472-b470-9d3db32cd8ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.741 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[3e739ca0-0135-4570-90e5-c57988bd93da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.743 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[ade663b8-390d-4c5e-ab48-7baf99d59eea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 nova_compute[188777]: 2026-02-19 20:36:38.751 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.756 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8688dc-bf60-4a10-82f3-9a4e47f82bc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489691, 'reachable_time': 42272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253658, 'error': None, 'target': 'ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d580376f8\x2d802f\x2d4dbc\x2dbd1f\x2dd9322eb2ee71.mount: Deactivated successfully.
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.759 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-580376f8-802f-4dbc-bd1f-d9322eb2ee71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:36:38 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:38.759 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[33b44a29-2f75-40a1-bc54-bcee6335e4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:39 compute-0 nova_compute[188777]: 2026-02-19 20:36:39.804 188781 DEBUG nova.compute.manager [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Received event network-changed-913d86d2-685f-4393-9143-efa6e9c6941a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:39 compute-0 nova_compute[188777]: 2026-02-19 20:36:39.804 188781 DEBUG nova.compute.manager [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Refreshing instance network info cache due to event network-changed-913d86d2-685f-4393-9143-efa6e9c6941a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:36:39 compute-0 nova_compute[188777]: 2026-02-19 20:36:39.805 188781 DEBUG oslo_concurrency.lockutils [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:39 compute-0 nova_compute[188777]: 2026-02-19 20:36:39.805 188781 DEBUG oslo_concurrency.lockutils [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:39 compute-0 nova_compute[188777]: 2026-02-19 20:36:39.805 188781 DEBUG nova.network.neutron [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Refreshing network info cache for port 913d86d2-685f-4393-9143-efa6e9c6941a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.042 188781 DEBUG nova.network.neutron [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.058 188781 INFO nova.compute.manager [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Took 1.55 seconds to deallocate network for instance.
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.104 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.105 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.248 188781 DEBUG nova.compute.provider_tree [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.268 188781 DEBUG nova.scheduler.client.report [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.294 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.338 188781 INFO nova.scheduler.client.report [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Deleted allocations for instance 3480b144-b674-41b9-bf18-e66e647fbe86
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.421 188781 DEBUG oslo_concurrency.lockutils [None req-117bcdde-856d-4a45-b87c-97d96ebf3a68 cafdfae88326444da09076b7e3156d58 df02c0da56494e34a2e958e403cdd24b - - default default] Lock "3480b144-b674-41b9-bf18-e66e647fbe86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.645 188781 DEBUG nova.network.neutron [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updated VIF entry in instance network info cache for port 44b4451c-db39-42a3-a2c6-5c8c42d1669b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.646 188781 DEBUG nova.network.neutron [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.673 188781 DEBUG oslo_concurrency.lockutils [req-75ce2b47-dd46-4c1f-8500-473f34a619f9 req-1c0a32a7-9c41-4e9a-a418-25d63b9bb2f3 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:40 compute-0 nova_compute[188777]: 2026-02-19 20:36:40.722 188781 DEBUG nova.compute.manager [req-08174b8e-94e4-4166-a052-c08f8318a6e0 req-bdef4691-5727-41f3-b135-2a47904ffde4 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Received event network-vif-deleted-1989fec7-60a1-41e3-bd78-56a7bdfdad64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:42 compute-0 nova_compute[188777]: 2026-02-19 20:36:42.121 188781 DEBUG nova.network.neutron [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated VIF entry in instance network info cache for port 913d86d2-685f-4393-9143-efa6e9c6941a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:36:42 compute-0 nova_compute[188777]: 2026-02-19 20:36:42.122 188781 DEBUG nova.network.neutron [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:42 compute-0 nova_compute[188777]: 2026-02-19 20:36:42.149 188781 DEBUG oslo_concurrency.lockutils [req-6d439347-185f-46c2-8eeb-d208941398f6 req-899e6581-253c-438a-87ce-0f65dcf8abd8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:43 compute-0 nova_compute[188777]: 2026-02-19 20:36:43.410 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:43 compute-0 nova_compute[188777]: 2026-02-19 20:36:43.753 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:45 compute-0 nova_compute[188777]: 2026-02-19 20:36:45.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:45 compute-0 nova_compute[188777]: 2026-02-19 20:36:45.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:36:46 compute-0 sshd-session[253659]: Received disconnect from 103.179.56.24 port 43666:11: Bye Bye [preauth]
Feb 19 20:36:46 compute-0 sshd-session[253659]: Disconnected from authenticating user root 103.179.56.24 port 43666 [preauth]
Feb 19 20:36:46 compute-0 ovn_controller[98843]: 2026-02-19T20:36:46Z|00114|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:36:46 compute-0 ovn_controller[98843]: 2026-02-19T20:36:46Z|00115|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:36:46 compute-0 ovn_controller[98843]: 2026-02-19T20:36:46Z|00116|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:36:47 compute-0 nova_compute[188777]: 2026-02-19 20:36:47.025 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:48 compute-0 nova_compute[188777]: 2026-02-19 20:36:48.414 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:48 compute-0 nova_compute[188777]: 2026-02-19 20:36:48.756 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.371 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:49 compute-0 sshd-session[253661]: Invalid user vyos from 83.235.16.111 port 42050
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.756 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.757 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.781 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.869 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.870 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.883 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:36:49 compute-0 nova_compute[188777]: 2026-02-19 20:36:49.884 188781 INFO nova.compute.claims [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:36:49 compute-0 sshd-session[253661]: Received disconnect from 83.235.16.111 port 42050:11: Bye Bye [preauth]
Feb 19 20:36:49 compute-0 sshd-session[253661]: Disconnected from invalid user vyos 83.235.16.111 port 42050 [preauth]
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.047 188781 DEBUG nova.compute.provider_tree [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.066 188781 DEBUG nova.scheduler.client.report [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.085 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.086 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.131 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.132 188781 DEBUG nova.network.neutron [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.151 188781 INFO nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.173 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.261 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.262 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.263 188781 INFO nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Creating image(s)
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.263 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.264 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.265 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.265 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c4978917f5870b26b06a12225871f7dbd3da64fb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.266 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c4978917f5870b26b06a12225871f7dbd3da64fb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:50 compute-0 nova_compute[188777]: 2026-02-19 20:36:50.601 188781 DEBUG nova.policy [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
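The failed policy check above is expected for a non-admin request: network:attach_external_network defaults to admin-only, so nova quietly filters external networks out of the candidates instead of raising. A toy illustration of the decision, not the oslo.policy engine itself (the default rule amounts to context_is_admin; creds are trimmed from the log line):

    creds = {'is_admin': False, 'roles': ['reader', 'member']}

    def context_is_admin(creds):
        # Default rule: only an admin context may attach external networks.
        return creds['is_admin'] or 'admin' in creds['roles']

    assert context_is_admin(creds) is False  # matches the DEBUG line above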
Feb 19 20:36:51 compute-0 nova_compute[188777]: 2026-02-19 20:36:51.962 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.021 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.part --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
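Note the wrapper around qemu-img here: oslo_concurrency.prlimit execs the probe under an address-space cap (--as=1073741824, i.e. 1 GiB) and a CPU-time cap (--cpu=30 seconds) so a malformed or hostile image cannot stall or exhaust the host during inspection. A sketch of an equivalent guard using only the standard library (the image path is a placeholder):

    import resource
    import subprocess

    def _limits():
        # Applied in the child just before exec, mirroring the prlimit flags.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))   # --as
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))            # --cpu

    subprocess.run(
        ['qemu-img', 'info', '/path/to/image', '--force-share', '--output=json'],
        preexec_fn=_limits, check=True, capture_output=True)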
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.023 188781 DEBUG nova.virt.images [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] e98a7b34-d7ef-4dcd-b1f3-0a369d480f18 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.025 188781 DEBUG nova.privsep.utils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.025 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.part /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.280 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.part /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.converted" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.291 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.352 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.353 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c4978917f5870b26b06a12225871f7dbd3da64fb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
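The two-second hold on the c4978917... lock covers nova's _base image cache workflow: the Glance image is downloaded to a .part file, probed for its format, converted qcow2-to-raw into a .converted file, then renamed into place so concurrent builds on this host share a single raw base. A sketch of the probe-and-convert steps using the same commands the log shows (the <hash> path is a placeholder; download, rename and error handling omitted):

    import json
    import subprocess

    def qemu_img_info(path):
        # Same probe the log runs (minus the prlimit wrapper shown earlier).
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    base = '/var/lib/nova/instances/_base/<hash>'  # placeholder path
    if qemu_img_info(base + '.part')['format'] == 'qcow2':
        # "e98a7b34-... was qcow2, converting to raw"
        subprocess.run(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                        '-f', 'qcow2', base + '.part', base + '.converted'],
                       check=True)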
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.372 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 podman[253674]: 2026-02-19 20:36:52.37489103 +0000 UTC m=+0.059790275 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:36:52 compute-0 podman[253672]: 2026-02-19 20:36:52.405029279 +0000 UTC m=+0.093661711 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
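The podman health_status=healthy events for node_exporter and openstack_network_exporter above come from each container's configured healthcheck test (the 'healthcheck' key in config_data). The same probe can be triggered on demand; a small sketch, with the container name taken from the log:

    import subprocess

    # `podman healthcheck run NAME` exits 0 when the container's configured
    # test passes -- the same check behind the periodic health_status events.
    ok = subprocess.run(['podman', 'healthcheck', 'run', 'node_exporter'],
                        check=False).returncode == 0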
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.431 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.432 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c4978917f5870b26b06a12225871f7dbd3da64fb" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.433 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c4978917f5870b26b06a12225871f7dbd3da64fb" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.446 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.460 188781 DEBUG nova.network.neutron [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Successfully created port: 3b9e0369-31ef-4446-b291-70f0cbddeb63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.495 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.496 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb,backing_fmt=raw /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.534 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb,backing_fmt=raw /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.535 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c4978917f5870b26b06a12225871f7dbd3da64fb" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
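The instance disk created above is a thin qcow2 overlay whose backing file is the shared raw base image: writes land in the per-instance file while reads fall through to the base. The logged command reduced to its arguments, for reference (paths shortened to placeholders; 1073741824 bytes is the flavor's 1 GiB root disk):

    import subprocess

    # Copy-on-write overlay on the shared raw base image.
    subprocess.run(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', 'backing_file=/var/lib/nova/instances/_base/<hash>,backing_fmt=raw',
         '/var/lib/nova/instances/<uuid>/disk', '1073741824'],
        check=True)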
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.536 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.584 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.585 188781 DEBUG nova.virt.disk.api [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Checking if we can resize image /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.586 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.641 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.642 188781 DEBUG nova.virt.disk.api [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Cannot resize image /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
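"Cannot resize image ... to a smaller size." is the benign outcome of can_resize_image: nova only ever grows a disk toward the flavor size, never shrinks it, and here the overlay's virtual size already meets the requested 1073741824 bytes. A sketch of that guard, reusing the qemu_img_info helper sketched earlier (disk path is a placeholder):

    requested = 1073741824  # flavor root_gb=1, from the log
    virtual = qemu_img_info('/var/lib/nova/instances/<uuid>/disk')['virtual-size']
    if virtual < requested:
        pass  # grow the overlay (qemu-img resize) up to the flavor size
    else:
        pass  # skip: "Cannot resize image ... to a smaller size."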
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.643 188781 DEBUG nova.objects.instance [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lazy-loading 'migration_context' on Instance uuid 1b6b1397-fda7-4470-883b-1cc5974fac84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.666 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.667 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Ensure instance console log exists: /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.668 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.668 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:52 compute-0 nova_compute[188777]: 2026-02-19 20:36:52.669 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.362 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771533398.3609748, 3480b144-b674-41b9-bf18-e66e647fbe86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.363 188781 INFO nova.compute.manager [-] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] VM Stopped (Lifecycle Event)
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.387 188781 DEBUG nova.compute.manager [None req-56da8067-0cda-4b93-b3c1-d7039b0acf73 - - - - - -] [instance: 3480b144-b674-41b9-bf18-e66e647fbe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.417 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:53 compute-0 nova_compute[188777]: 2026-02-19 20:36:53.759 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:54 compute-0 nova_compute[188777]: 2026-02-19 20:36:54.808 188781 DEBUG nova.network.neutron [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Successfully updated port: 3b9e0369-31ef-4446-b291-70f0cbddeb63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:36:54 compute-0 nova_compute[188777]: 2026-02-19 20:36:54.825 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:54 compute-0 nova_compute[188777]: 2026-02-19 20:36:54.826 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquired lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:54 compute-0 nova_compute[188777]: 2026-02-19 20:36:54.827 188781 DEBUG nova.network.neutron [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:36:55 compute-0 nova_compute[188777]: 2026-02-19 20:36:55.078 188781 DEBUG nova.compute.manager [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-changed-3b9e0369-31ef-4446-b291-70f0cbddeb63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:55 compute-0 nova_compute[188777]: 2026-02-19 20:36:55.079 188781 DEBUG nova.compute.manager [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Refreshing instance network info cache due to event network-changed-3b9e0369-31ef-4446-b291-70f0cbddeb63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:36:55 compute-0 nova_compute[188777]: 2026-02-19 20:36:55.080 188781 DEBUG oslo_concurrency.lockutils [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:36:55 compute-0 nova_compute[188777]: 2026-02-19 20:36:55.144 188781 DEBUG nova.network.neutron [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:36:55 compute-0 podman[253730]: 2026-02-19 20:36:55.374666828 +0000 UTC m=+0.060552358 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.699 188781 DEBUG nova.network.neutron [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updating instance_info_cache with network_info: [{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.740 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Releasing lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.741 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Instance network_info: |[{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
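Stripped of the log decoration, the network_info blob repeated in these lines is plain JSON, and only a handful of its fields feed the guest definition: the MAC address, the tap device name, the fixed IP and the MTU. A minimal extraction, assuming `blob` holds the JSON list logged above:

    import json

    network_info = json.loads(blob)  # `blob` = the JSON list from the log line
    vif = network_info[0]
    mac = vif['address']                                          # fa:16:3e:56:ea:b9
    devname = vif['devname']                                      # tap3b9e0369-31
    fixed_ip = vif['network']['subnets'][0]['ips'][0]['address']  # 10.100.1.142
    mtu = vif['network']['meta']['mtu']                           # 1442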
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.743 188781 DEBUG oslo_concurrency.lockutils [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.749 188781 DEBUG nova.network.neutron [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Refreshing network info cache for port 3b9e0369-31ef-4446-b291-70f0cbddeb63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.753 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Start _get_guest_xml network_info=[{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:36:39Z,direct_url=<?>,disk_format='qcow2',id=e98a7b34-d7ef-4dcd-b1f3-0a369d480f18,min_disk=0,min_ram=0,name='tempest-scenario-img--770255378',owner='3e54c3b3dadc42fca16da4cb7212a2db',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:36:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.766 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.773 188781 WARNING nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.779 188781 DEBUG nova.virt.libvirt.host [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.780 188781 DEBUG nova.virt.libvirt.host [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.784 188781 DEBUG nova.virt.libvirt.host [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.785 188781 DEBUG nova.virt.libvirt.host [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.785 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.786 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:36:39Z,direct_url=<?>,disk_format='qcow2',id=e98a7b34-d7ef-4dcd-b1f3-0a369d480f18,min_disk=0,min_ram=0,name='tempest-scenario-img--770255378',owner='3e54c3b3dadc42fca16da4cb7212a2db',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:36:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.786 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.787 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.787 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.787 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.788 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.788 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.789 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.789 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.790 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.790 188781 DEBUG nova.virt.hardware [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
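With every flavor and image constraint unset (limits 0:0:0 against maxima of 65536), the topology search above reduces to enumerating factorizations of the vCPU count, and for 1 vCPU the only candidate is sockets=1, cores=1, threads=1, exactly as logged. A simplified sketch of the enumeration (nova's real version also orders results against the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) whose product is exactly `vcpus`.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- "Got 1 possible topologies"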
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.794 188781 DEBUG nova.virt.libvirt.vif [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio',id=12,image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='08c5967c-a408-49e3-be73-425b7dd8ee8c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e54c3b3dadc42fca16da4cb7212a2db',ramdisk_id='',reservation_id='r-1jws38c4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-653304289',owner_user_name='tempest-PrometheusGabbiTest-653304289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:50Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4495bf20aedd42ff97fdae62ef729522',uuid=1b6b1397-fda7-4470-883b-1cc5974fac84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.794 188781 DEBUG nova.network.os_vif_util [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converting VIF {"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.795 188781 DEBUG nova.network.os_vif_util [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.796 188781 DEBUG nova.objects.instance [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b6b1397-fda7-4470-883b-1cc5974fac84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
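In the domain XML that follows, libvirt's <memory> element is expressed in KiB while the flavor carries MiB, so m1.nano's 128 MB appears as 131072:

    memory_mb = 128                    # flavor m1.nano, from the log
    assert memory_mb * 1024 == 131072  # matches <memory>131072</memory> below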
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.820 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <uuid>1b6b1397-fda7-4470-883b-1cc5974fac84</uuid>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <name>instance-0000000c</name>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:name>te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio</nova:name>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:36:56</nova:creationTime>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:user uuid="4495bf20aedd42ff97fdae62ef729522">tempest-PrometheusGabbiTest-653304289-project-member</nova:user>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:project uuid="3e54c3b3dadc42fca16da4cb7212a2db">tempest-PrometheusGabbiTest-653304289</nova:project>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="e98a7b34-d7ef-4dcd-b1f3-0a369d480f18"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         <nova:port uuid="3b9e0369-31ef-4446-b291-70f0cbddeb63">
Feb 19 20:36:56 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.1.142" ipVersion="4"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <system>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <entry name="serial">1b6b1397-fda7-4470-883b-1cc5974fac84</entry>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <entry name="uuid">1b6b1397-fda7-4470-883b-1cc5974fac84</entry>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </system>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <os>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </os>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <features>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </features>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.config"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:56:ea:b9"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <target dev="tap3b9e0369-31"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/console.log" append="off"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <video>
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </video>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:36:56 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:36:56 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:36:56 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:36:56 compute-0 nova_compute[188777]: </domain>
Feb 19 20:36:56 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
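The XML dump above is the complete libvirt domain definition nova generated for instance-0000000c: q35 machine type, one vCPU, 128 MiB of RAM, a virtio qcow2 root disk plus a SATA CD-ROM for the config drive, one OVS-backed virtio NIC with MTU 1442, and 24 pcie-root-port controllers for hotplug headroom. A minimal sketch of what the driver does with it next, assuming `xml` holds that dump (the exact create flags nova passes are not visible in this log):

    import libvirt

    conn = libvirt.open('qemu:///system')                  # default URI on a KVM compute node
    dom = conn.defineXML(xml)                              # persist the definition dumped above
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)   # start suspended; the guest is
                                                           # resumed once Neutron confirms the VIF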
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.830 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Preparing to wait for external event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.831 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.831 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.831 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
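The three lockutils lines serialize access to the per-instance event registry: before starting the guest, nova arms a waiter for network-vif-plugged-3b9e0369-... so the spawn can block until Neutron reports the port wired up. A hedged sketch of that handshake, using the wait_for_instance_event context manager named in the prepare_for_instance_event path above (argument names assumed, not verified against this exact nova build):

    # deadline comes from CONF.vif_plugging_timeout in a stock nova.conf
    with self.virtapi.wait_for_instance_event(
            instance,
            [('network-vif-plugged', '3b9e0369-31ef-4446-b291-70f0cbddeb63')],
            deadline=CONF.vif_plugging_timeout):
        plug_vifs(instance)    # hypothetical stand-in for the os-vif plug below
        start_guest(instance)  # guest stays paused until the event arrives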
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.832 188781 DEBUG nova.virt.libvirt.vif [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:36:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio',id=12,image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='08c5967c-a408-49e3-be73-425b7dd8ee8c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e54c3b3dadc42fca16da4cb7212a2db',ramdisk_id='',reservation_id='r-1jws38c4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-653304289',owner_user_name='tempest-PrometheusGabbiTest-653304289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:36:50Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4495bf20aedd42ff97fdae62ef729522',uuid=1b6b1397-fda7-4470-883b-1cc5974fac84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.832 188781 DEBUG nova.network.os_vif_util [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converting VIF {"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.833 188781 DEBUG nova.network.os_vif_util [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.834 188781 DEBUG os_vif [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
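os-vif takes over from here: the previous two lines converted nova's VIF dict into a VIFOpenVSwitch object, which is handed to the ovs plugin. A minimal sketch of that call, with InstanceInfo fields taken from the Instance repr above and `vif` standing for the converted object:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()                  # loads the installed plugins (ovs here)
    info = InstanceInfo(
        uuid='1b6b1397-fda7-4470-883b-1cc5974fac84',
        name='instance-0000000c')
    os_vif.plug(vif, info)               # emits the ovsdbapp transactions logged next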
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.834 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.835 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.836 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.839 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.839 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b9e0369-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.841 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b9e0369-31, col_values=(('external_ids', {'iface-id': '3b9e0369-31ef-4446-b291-70f0cbddeb63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:ea:b9', 'vm-uuid': '1b6b1397-fda7-4470-883b-1cc5974fac84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
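The plug expands into idempotent OVSDB transactions: ensure br-int exists (a no-op here, hence "Transaction caused no change"), add the tap port, then stamp the Interface row with the external_ids that OVN keys on. The same two commands expressed directly against ovsdbapp, assuming `api` is a connected Open_vSwitch schema API instance:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap3b9e0369-31', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3b9e0369-31',
            ('external_ids', {
                'iface-id': '3b9e0369-31ef-4446-b291-70f0cbddeb63',  # Neutron port UUID
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:56:ea:b9',
                'vm-uuid': '1b6b1397-fda7-4470-883b-1cc5974fac84'})))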
Feb 19 20:36:56 compute-0 NetworkManager[57033]: <info>  [1771533416.8447] manager: (tap3b9e0369-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.847 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.850 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.850 188781 INFO os_vif [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31')
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.958 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.959 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.960 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] No VIF found with MAC fa:16:3e:56:ea:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:36:56 compute-0 nova_compute[188777]: 2026-02-19 20:36:56.961 188781 INFO nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Using config drive
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.261 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.270 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.294 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.294 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.295 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.295 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.377 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.424 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.425 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.471 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.478 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.551 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.552 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.598 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.607 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.658 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.659 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.736 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.745 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.821 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.823 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.874 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
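Interleaved with the spawn, the update_available_resource periodic task probes each instance disk with qemu-img info, twice in quick succession per disk (visible above as paired CMD lines), wrapped in a prlimit guard of 1 GiB address space and 30 s CPU time so a malformed qcow2 cannot wedge the agent. The equivalent call through oslo.concurrency, with `disk_path` assumed:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', disk_path, '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,  # --as=1073741824
                                           cpu_time=30))              # --cpu=30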
Feb 19 20:36:57 compute-0 nova_compute[188777]: 2026-02-19 20:36:57.876 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Periodic task is updating the host stat, it is trying to get disk instance-0000000c, but disk file was removed by concurrent operations such as resize.: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.config'
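That WARNING is a benign race, not data loss: the audit tried to stat instance-0000000c's disk.config, but the config drive is only created at 20:36:58.291 below, so the file genuinely did not exist yet and the driver's stock message about resize is misleading in this case. The tolerant pattern behind it, sketched with a hypothetical probe helper:

    try:
        info = qemu_img_info(path)   # hypothetical probe; raises if the file is gone
    except FileNotFoundError:
        LOG.warning('disk %s missing mid-audit (racing spawn/resize); skipping', path)
        info = None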
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.223 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.225 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.13968276977539GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.225 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.226 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.291 188781 INFO nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Creating config drive at /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.config
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.295 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp35j2438g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.344 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance da31f324-38ad-4f77-b724-3ef1628be336 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.345 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.345 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.346 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.346 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.347 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.415 188781 DEBUG oslo_concurrency.processutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp35j2438g" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
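The config drive is a mkisofs run over a staged temp tree (/tmp/tmp35j2438g) producing an ISO9660 volume labelled config-2, which cloud-init locates by that label. The publisher string only looks unquoted in the CMD line because the log joins argv with spaces; it is passed as a single argument, as in this sketch (`disk_config_path` and `tmpdir` assumed):

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o', disk_config_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', tmpdir)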
Feb 19 20:36:58 compute-0 kernel: tap3b9e0369-31: entered promiscuous mode
Feb 19 20:36:58 compute-0 NetworkManager[57033]: <info>  [1771533418.4954] manager: (tap3b9e0369-31): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Feb 19 20:36:58 compute-0 ovn_controller[98843]: 2026-02-19T20:36:58Z|00117|binding|INFO|Claiming lport 3b9e0369-31ef-4446-b291-70f0cbddeb63 for this chassis.
Feb 19 20:36:58 compute-0 ovn_controller[98843]: 2026-02-19T20:36:58Z|00118|binding|INFO|3b9e0369-31ef-4446-b291-70f0cbddeb63: Claiming fa:16:3e:56:ea:b9 10.100.1.142
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.503 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.520 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:36:58 compute-0 systemd-udevd[253790]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:36:58 compute-0 ovn_controller[98843]: 2026-02-19T20:36:58Z|00119|binding|INFO|Setting lport 3b9e0369-31ef-4446-b291-70f0cbddeb63 ovn-installed in OVS
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.526 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:58 compute-0 systemd-machined[158158]: New machine qemu-12-instance-0000000c.
Feb 19 20:36:58 compute-0 NetworkManager[57033]: <info>  [1771533418.5365] device (tap3b9e0369-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:36:58 compute-0 NetworkManager[57033]: <info>  [1771533418.5403] device (tap3b9e0369-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.539 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
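Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so with the ratios above this 8-vCPU, 7679 MB, 79 GB host advertises:

    vcpus = (8    - 0)   * 4.0   # 32.0 schedulable VCPUs (4 used, per the audit above)
    ram   = (7679 - 512) * 1.0   # 7167 schedulable MB
    disk  = (79   - 1)   * 0.9   # 70.2 schedulable GB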
Feb 19 20:36:58 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.558 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.559 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:58 compute-0 ovn_controller[98843]: 2026-02-19T20:36:58Z|00120|binding|INFO|Setting lport 3b9e0369-31ef-4446-b291-70f0cbddeb63 up in Southbound
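ovn_controller lines 00117-00120 are the OVN half of the plug: the controller saw iface-id 3b9e0369-... appear on a local OVS interface, matched it to this chassis's Southbound Port_Binding, marked the interface ovn-installed, and set the port up. One hedged way to verify from the compute node, shelling out to ovn-sbctl (the --columns option is assumed available in this OVN build):

    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
        'logical_port=3b9e0369-31ef-4446-b291-70f0cbddeb63')
    print(out)   # expect this chassis in `chassis` and up : true after line 00120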
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.594 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:ea:b9 10.100.1.142'], port_security=['fa:16:3e:56:ea:b9 10.100.1.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.142/16', 'neutron:device_id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c84042e2-5094-46cb-8818-ed6fb8d69afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e658df7-7d87-44f0-8690-f7f2e1d7b0ae, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=3b9e0369-31ef-4446-b291-70f0cbddeb63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.596 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 3b9e0369-31ef-4446-b291-70f0cbddeb63 in datapath 03b0387c-cb4d-416d-b212-4d980b66cbe2 bound to our chassis
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.598 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03b0387c-cb4d-416d-b212-4d980b66cbe2
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.606 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e71fa978-6863-41ac-b760-a1b68ba17156]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.608 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03b0387c-c1 in ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.610 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03b0387c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.610 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[8560f831-57e6-4fa1-805d-895543888a0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.612 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[bd820f7c-b5a9-4230-a5bb-2b332c4a4fbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.621 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0f5318-768d-4dcc-9976-610f7efdc741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.645 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfe87be-3e36-48e9-8005-06c661fc2b25]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.669 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[3aaaafa3-46b7-46a0-ac24-24cc7f9692b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 systemd-udevd[253793]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:36:58 compute-0 NetworkManager[57033]: <info>  [1771533418.6779] manager: (tap03b0387c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.678 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[3d924b4a-f2c3-475a-9268-0d356dfafb45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.714 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc9805b-2097-494e-85d1-26b21da3d344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.718 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[c441f587-15c8-4bb8-ba32-51cbfbc9943c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 NetworkManager[57033]: <info>  [1771533418.7366] device (tap03b0387c-c0): carrier: link connected
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.739 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[5be467d9-962d-4c17-99a3-bf2a5df7bb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.752 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[65a55e20-c455-498c-86e8-8f44e6528e7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03b0387c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:70:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493020, 'reachable_time': 42704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253830, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.760 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.769 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[0a53c937-8a37-425e-9387-613129f5bfb8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:70b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493020, 'tstamp': 493020}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253831, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.782 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[32fb254b-59f7-4627-9af1-a69561a0d459]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03b0387c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:70:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493020, 'reachable_time': 42704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253832, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.798 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[2214de89-9848-4da3-bc21-1deefbef896d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.830 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533418.8297365, 1b6b1397-fda7-4470-883b-1cc5974fac84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.830 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] VM Started (Lifecycle Event)
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.843 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4cc36b-7b4c-467f-b7db-1644086b15fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.845 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03b0387c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.845 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.845 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03b0387c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:58 compute-0 kernel: tap03b0387c-c0: entered promiscuous mode
Feb 19 20:36:58 compute-0 NetworkManager[57033]: <info>  [1771533418.8504] manager: (tap03b0387c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.847 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.854 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03b0387c-c0, col_values=(('external_ids', {'iface-id': 'ac510fcf-4783-4f81-b107-f5dac80c5fad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.855 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:58 compute-0 ovn_controller[98843]: 2026-02-19T20:36:58Z|00121|binding|INFO|Releasing lport ac510fcf-4783-4f81-b107-f5dac80c5fad from this chassis (sb_readonly=0)
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.859 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.863 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533418.8298557, 1b6b1397-fda7-4470-883b-1cc5974fac84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.863 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] VM Paused (Lifecycle Event)
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.880 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.881 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03b0387c-cb4d-416d-b212-4d980b66cbe2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03b0387c-cb4d-416d-b212-4d980b66cbe2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.882 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[e15a9af0-1bfc-457a-b8f7-6924fd7b2364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.883 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-03b0387c-cb4d-416d-b212-4d980b66cbe2
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/03b0387c-cb4d-416d-b212-4d980b66cbe2.pid.haproxy
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID 03b0387c-cb4d-416d-b212-4d980b66cbe2
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 19 20:36:58 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:36:58.884 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'env', 'PROCESS_TAG=haproxy-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03b0387c-cb4d-416d-b212-4d980b66cbe2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.908 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.915 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:58 compute-0 nova_compute[188777]: 2026-02-19 20:36:58.945 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.109 188781 DEBUG nova.compute.manager [req-225d7e77-15fe-49e7-96c0-fbdcb6e11117 req-decf4601-0d86-473b-86c9-5c3ed289af5e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.109 188781 DEBUG oslo_concurrency.lockutils [req-225d7e77-15fe-49e7-96c0-fbdcb6e11117 req-decf4601-0d86-473b-86c9-5c3ed289af5e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.109 188781 DEBUG oslo_concurrency.lockutils [req-225d7e77-15fe-49e7-96c0-fbdcb6e11117 req-decf4601-0d86-473b-86c9-5c3ed289af5e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.110 188781 DEBUG oslo_concurrency.lockutils [req-225d7e77-15fe-49e7-96c0-fbdcb6e11117 req-decf4601-0d86-473b-86c9-5c3ed289af5e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.110 188781 DEBUG nova.compute.manager [req-225d7e77-15fe-49e7-96c0-fbdcb6e11117 req-decf4601-0d86-473b-86c9-5c3ed289af5e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Processing event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.110 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.114 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533419.1144602, 1b6b1397-fda7-4470-883b-1cc5974fac84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.114 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] VM Resumed (Lifecycle Event)
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.116 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.120 188781 INFO nova.virt.libvirt.driver [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Instance spawned successfully.
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.120 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.144 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.146 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.147 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.147 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.148 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.148 188781 DEBUG nova.virt.libvirt.driver [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.153 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.157 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.239 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:36:59 compute-0 podman[253864]: 2026-02-19 20:36:59.266939618 +0000 UTC m=+0.074525244 container create c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.294 188781 INFO nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Took 9.03 seconds to spawn the instance on the hypervisor.
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.294 188781 DEBUG nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:36:59 compute-0 systemd[1]: Started libpod-conmon-c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855.scope.
Feb 19 20:36:59 compute-0 podman[253864]: 2026-02-19 20:36:59.222328058 +0000 UTC m=+0.029913714 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:36:59 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cff5322526fa26e2f34632acf71ebbd870f506531aaceb13c1e168509e7174d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.371 188781 INFO nova.compute.manager [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Took 9.54 seconds to build instance.
Feb 19 20:36:59 compute-0 podman[253864]: 2026-02-19 20:36:59.375763151 +0000 UTC m=+0.183348797 container init c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 19 20:36:59 compute-0 podman[253864]: 2026-02-19 20:36:59.381953344 +0000 UTC m=+0.189538980 container start c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 19 20:36:59 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [NOTICE]   (253883) : New worker (253885) forked
Feb 19 20:36:59 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [NOTICE]   (253883) : Loading success.
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.410 188781 DEBUG oslo_concurrency.lockutils [None req-60f16d7e-f946-44f2-b35b-25a6d9a8598f 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.552 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.554 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:36:59 compute-0 nova_compute[188777]: 2026-02-19 20:36:59.555 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:36:59 compute-0 podman[204724]: time="2026-02-19T20:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:36:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32938 "" "Go-http-client/1.1"
Feb 19 20:36:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5755 "" "Go-http-client/1.1"
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.036 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.036 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.036 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.036 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:37:00 compute-0 podman[253896]: 2026-02-19 20:37:00.403733535 +0000 UTC m=+0.088354076 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Feb 19 20:37:00 compute-0 podman[253895]: 2026-02-19 20:37:00.415834341 +0000 UTC m=+0.094958871 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=kepler, name=ubi9, architecture=x86_64)
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.714 188781 DEBUG nova.network.neutron [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updated VIF entry in instance network info cache for port 3b9e0369-31ef-4446-b291-70f0cbddeb63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.714 188781 DEBUG nova.network.neutron [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updating instance_info_cache with network_info: [{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:37:00 compute-0 nova_compute[188777]: 2026-02-19 20:37:00.743 188781 DEBUG oslo_concurrency.lockutils [req-66d0d339-56ad-4ddd-9485-ec0733685d6b req-fbeded93-eea8-4914-9836-050f7e808dd9 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:37:01 compute-0 openstack_network_exporter[207898]: ERROR   20:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:37:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:37:01 compute-0 openstack_network_exporter[207898]: ERROR   20:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:37:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.466 188781 DEBUG nova.compute.manager [req-011777e9-5dcc-4dd3-b4ab-2a8b0f5014de req-b3c6c5d5-5e05-4263-8e39-3d16995b32a8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.466 188781 DEBUG oslo_concurrency.lockutils [req-011777e9-5dcc-4dd3-b4ab-2a8b0f5014de req-b3c6c5d5-5e05-4263-8e39-3d16995b32a8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.466 188781 DEBUG oslo_concurrency.lockutils [req-011777e9-5dcc-4dd3-b4ab-2a8b0f5014de req-b3c6c5d5-5e05-4263-8e39-3d16995b32a8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.467 188781 DEBUG oslo_concurrency.lockutils [req-011777e9-5dcc-4dd3-b4ab-2a8b0f5014de req-b3c6c5d5-5e05-4263-8e39-3d16995b32a8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.467 188781 DEBUG nova.compute.manager [req-011777e9-5dcc-4dd3-b4ab-2a8b0f5014de req-b3c6c5d5-5e05-4263-8e39-3d16995b32a8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] No waiting events found dispatching network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.467 188781 WARNING nova.compute.manager [req-011777e9-5dcc-4dd3-b4ab-2a8b0f5014de req-b3c6c5d5-5e05-4263-8e39-3d16995b32a8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received unexpected event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 for instance with vm_state active and task_state None.
Feb 19 20:37:01 compute-0 nova_compute[188777]: 2026-02-19 20:37:01.844 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.480 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:37:03 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:03.497 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.498 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:03 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:03.499 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.510 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.511 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.511 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.512 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.512 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:03 compute-0 nova_compute[188777]: 2026-02-19 20:37:03.762 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:04 compute-0 nova_compute[188777]: 2026-02-19 20:37:04.220 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:04 compute-0 podman[253954]: 2026-02-19 20:37:04.396654732 +0000 UTC m=+0.081143511 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:37:05 compute-0 ovn_controller[98843]: 2026-02-19T20:37:05Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:60:ee 10.100.0.3
Feb 19 20:37:05 compute-0 ovn_controller[98843]: 2026-02-19T20:37:05Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:60:ee 10.100.0.3
Feb 19 20:37:05 compute-0 nova_compute[188777]: 2026-02-19 20:37:05.531 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:05 compute-0 nova_compute[188777]: 2026-02-19 20:37:05.532 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:05 compute-0 nova_compute[188777]: 2026-02-19 20:37:05.532 188781 INFO nova.compute.manager [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Rebooting instance
Feb 19 20:37:05 compute-0 nova_compute[188777]: 2026-02-19 20:37:05.552 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:37:05 compute-0 nova_compute[188777]: 2026-02-19 20:37:05.553 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquired lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:37:05 compute-0 nova_compute[188777]: 2026-02-19 20:37:05.554 188781 DEBUG nova.network.neutron [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:37:06 compute-0 podman[253999]: 2026-02-19 20:37:06.373016219 +0000 UTC m=+0.065073750 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute)
Feb 19 20:37:06 compute-0 nova_compute[188777]: 2026-02-19 20:37:06.846 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:07 compute-0 ovn_controller[98843]: 2026-02-19T20:37:07Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:a8:ee 10.100.0.4
Feb 19 20:37:07 compute-0 ovn_controller[98843]: 2026-02-19T20:37:07Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:a8:ee 10.100.0.4
Feb 19 20:37:08 compute-0 podman[254019]: 2026-02-19 20:37:08.399379355 +0000 UTC m=+0.086406215 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 19 20:37:08 compute-0 nova_compute[188777]: 2026-02-19 20:37:08.526 188781 DEBUG nova.network.neutron [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:37:08 compute-0 nova_compute[188777]: 2026-02-19 20:37:08.591 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Releasing lock "refresh_cache-da31f324-38ad-4f77-b724-3ef1628be336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:37:08 compute-0 nova_compute[188777]: 2026-02-19 20:37:08.593 188781 DEBUG nova.compute.manager [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:37:08 compute-0 nova_compute[188777]: 2026-02-19 20:37:08.768 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:08 compute-0 kernel: tapb9a6ef82-e3 (unregistering): left promiscuous mode
Feb 19 20:37:08 compute-0 NetworkManager[57033]: <info>  [1771533428.8723] device (tapb9a6ef82-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:37:08 compute-0 ovn_controller[98843]: 2026-02-19T20:37:08Z|00122|binding|INFO|Releasing lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 from this chassis (sb_readonly=0)
Feb 19 20:37:08 compute-0 ovn_controller[98843]: 2026-02-19T20:37:08Z|00123|binding|INFO|Setting lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 down in Southbound
Feb 19 20:37:08 compute-0 ovn_controller[98843]: 2026-02-19T20:37:08Z|00124|binding|INFO|Removing iface tapb9a6ef82-e3 ovn-installed in OVS
Feb 19 20:37:08 compute-0 nova_compute[188777]: 2026-02-19 20:37:08.888 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:08 compute-0 nova_compute[188777]: 2026-02-19 20:37:08.902 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:08 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 19 20:37:08 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000007.scope: Consumed 41.316s CPU time.
Feb 19 20:37:08 compute-0 systemd-machined[158158]: Machine qemu-8-instance-00000007 terminated.
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.006 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:08:9f 10.100.0.13'], port_security=['fa:16:3e:c6:08:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'da31f324-38ad-4f77-b724-3ef1628be336', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02e853c-7c37-4c12-a959-0da0ff097734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c8b3e035bb347acad9c4027457ee296', 'neutron:revision_number': '4', 'neutron:security_group_ids': '745eb45a-1fad-4b86-be2d-ed9c647c807b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5262953e-25bb-44de-850c-ced354d0d447, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.011 108175 INFO neutron.agent.ovn.metadata.agent [-] Port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 in datapath d02e853c-7c37-4c12-a959-0da0ff097734 unbound from our chassis
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.016 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d02e853c-7c37-4c12-a959-0da0ff097734, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.018 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a6c5f8-632d-45cd-84dd-f36de7016231]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.020 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 namespace which is not needed anymore
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.100 188781 INFO nova.virt.libvirt.driver [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance destroyed successfully.
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.101 188781 DEBUG nova.objects.instance [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'resources' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.113 188781 DEBUG nova.virt.libvirt.vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:35:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-541687296',display_name='tempest-ServerActionsTestJSON-server-541687296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-541687296',id=7,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBGUXfqYgLpnaK0US8EwHfzCPv+m8vpQ+fPWU8q/hF6l9cNu9x6P14aljSv28A+SD7n7yEsgSHzHQXS8tsguQqzUZEu4v3AxpVAXh2tIOAWxaA3uNPd6KcWlT+WQySBOhg==',key_name='tempest-keypair-175997513',keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:35:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c8b3e035bb347acad9c4027457ee296',ramdisk_id='',reservation_id='r-a1krnbi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1818290169',owner_user_name='tempest-ServerActionsTestJSON-1818290169-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:37:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='43931603bc9f40eab8e548129d4c50cb',uuid=da31f324-38ad-4f77-b724-3ef1628be336,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.114 188781 DEBUG nova.network.os_vif_util [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converting VIF {"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.115 188781 DEBUG nova.network.os_vif_util [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.115 188781 DEBUG os_vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.117 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.118 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9a6ef82-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.119 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.123 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.125 188781 INFO os_vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3')
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.135 188781 DEBUG nova.virt.libvirt.driver [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Start _get_guest_xml network_info=[{"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.142 188781 WARNING nova.virt.libvirt.driver [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.194 188781 DEBUG nova.virt.libvirt.host [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.195 188781 DEBUG nova.virt.libvirt.host [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.200 188781 DEBUG nova.virt.libvirt.host [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.201 188781 DEBUG nova.virt.libvirt.host [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.201 188781 DEBUG nova.virt.libvirt.driver [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.202 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=17b9bce8-a91b-495d-ac33-cf63893413f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.202 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.202 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.203 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.203 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.203 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.204 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.204 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.205 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.205 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.205 188781 DEBUG nova.virt.hardware [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.206 188781 DEBUG nova.objects.instance [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'vcpu_model' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:37:09 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [NOTICE]   (252649) : haproxy version is 2.8.14-c23fe91
Feb 19 20:37:09 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [NOTICE]   (252649) : path to executable is /usr/sbin/haproxy
Feb 19 20:37:09 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [WARNING]  (252649) : Exiting Master process...
Feb 19 20:37:09 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [ALERT]    (252649) : Current worker (252651) exited with code 143 (Terminated)
Feb 19 20:37:09 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[252645]: [WARNING]  (252649) : All workers exited. Exiting... (0)
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00125|binding|INFO|Releasing lport ac510fcf-4783-4f81-b107-f5dac80c5fad from this chassis (sb_readonly=0)
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00126|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00127|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00128|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:37:09 compute-0 systemd[1]: libpod-9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5.scope: Deactivated successfully.
Feb 19 20:37:09 compute-0 podman[254087]: 2026-02-19 20:37:09.246377517 +0000 UTC m=+0.074372799 container died 9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.267 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5-userdata-shm.mount: Deactivated successfully.
Feb 19 20:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4188f91374d22aefb111b16ec42944fa6f82fc968b49fa4da9404352f73dda6-merged.mount: Deactivated successfully.
Feb 19 20:37:09 compute-0 podman[254087]: 2026-02-19 20:37:09.299856984 +0000 UTC m=+0.127852266 container cleanup 9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:37:09 compute-0 systemd[1]: libpod-conmon-9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5.scope: Deactivated successfully.
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.330 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.381 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.384 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.386 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.389 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:09 compute-0 podman[254115]: 2026-02-19 20:37:09.390048606 +0000 UTC m=+0.050680061 container remove 9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.390 188781 DEBUG nova.virt.libvirt.vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:35:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-541687296',display_name='tempest-ServerActionsTestJSON-server-541687296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-541687296',id=7,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBGUXfqYgLpnaK0US8EwHfzCPv+m8vpQ+fPWU8q/hF6l9cNu9x6P14aljSv28A+SD7n7yEsgSHzHQXS8tsguQqzUZEu4v3AxpVAXh2tIOAWxaA3uNPd6KcWlT+WQySBOhg==',key_name='tempest-keypair-175997513',keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:35:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c8b3e035bb347acad9c4027457ee296',ramdisk_id='',reservation_id='r-a1krnbi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1818290169',owner_user_name='tempest-ServerActionsTestJSON-1818290169-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:37:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='43931603bc9f40eab8e548129d4c50cb',uuid=da31f324-38ad-4f77-b724-3ef1628be336,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.391 188781 DEBUG nova.network.os_vif_util [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converting VIF {"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.392 188781 DEBUG nova.network.os_vif_util [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.393 188781 DEBUG nova.objects.instance [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'pci_devices' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.394 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[644e63f6-a442-4f89-93b8-f0e93695df27]: (4, ('Thu Feb 19 08:37:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 (9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5)\n9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5\nThu Feb 19 08:37:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 (9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5)\n9eb6c2b6e3c5a5dbee6f9b6e5df3d9dec9d8eeeb8ff2063dfc30db9f43503bf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.395 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd29db0-2d00-49dd-a56e-42c79e1b8da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.396 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02e853c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.398 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 kernel: tapd02e853c-70: left promiscuous mode
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.403 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.407 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7dddae-c73a-4972-baa1-a28ec3a6059e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.409 188781 DEBUG nova.virt.libvirt.driver [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <uuid>da31f324-38ad-4f77-b724-3ef1628be336</uuid>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <name>instance-00000007</name>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:name>tempest-ServerActionsTestJSON-server-541687296</nova:name>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:37:09</nova:creationTime>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:user uuid="43931603bc9f40eab8e548129d4c50cb">tempest-ServerActionsTestJSON-1818290169-project-member</nova:user>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:project uuid="3c8b3e035bb347acad9c4027457ee296">tempest-ServerActionsTestJSON-1818290169</nova:project>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="17b9bce8-a91b-495d-ac33-cf63893413f9"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         <nova:port uuid="b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2">
Feb 19 20:37:09 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <system>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <entry name="serial">da31f324-38ad-4f77-b724-3ef1628be336</entry>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <entry name="uuid">da31f324-38ad-4f77-b724-3ef1628be336</entry>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </system>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <os>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </os>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <features>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </features>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk.config"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:c6:08:9f"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <target dev="tapb9a6ef82-e3"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/console.log" append="off"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <video>
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </video>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <input type="keyboard" bus="usb"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:37:09 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:37:09 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:37:09 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:37:09 compute-0 nova_compute[188777]: </domain>
Feb 19 20:37:09 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.416 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.420 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b608ee89-649a-493c-9b25-c3bbe8239f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.423 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[724626df-5082-4604-8001-b7015b76100f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.438 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b34e51-851a-40b5-af0f-45ccfead1c01]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485647, 'reachable_time': 19194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254132, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 systemd[1]: run-netns-ovnmeta\x2dd02e853c\x2d7c37\x2d4c12\x2da959\x2d0da0ff097734.mount: Deactivated successfully.
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.440 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.440 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[de24ece1-2346-4a21-bd8f-2bcbff2ce81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.477 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.478 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.551 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.561 188781 DEBUG nova.objects.instance [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'trusted_certs' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.633 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.680 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.681 188781 DEBUG nova.virt.disk.api [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Checking if we can resize image /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.682 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.731 188781 DEBUG oslo_concurrency.processutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.733 188781 DEBUG nova.virt.disk.api [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Cannot resize image /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.734 188781 DEBUG nova.objects.instance [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'migration_context' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.750 188781 DEBUG nova.virt.libvirt.vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:35:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-541687296',display_name='tempest-ServerActionsTestJSON-server-541687296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-541687296',id=7,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBGUXfqYgLpnaK0US8EwHfzCPv+m8vpQ+fPWU8q/hF6l9cNu9x6P14aljSv28A+SD7n7yEsgSHzHQXS8tsguQqzUZEu4v3AxpVAXh2tIOAWxaA3uNPd6KcWlT+WQySBOhg==',key_name='tempest-keypair-175997513',keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:35:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3c8b3e035bb347acad9c4027457ee296',ramdisk_id='',reservation_id='r-a1krnbi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1818290169',owner_user_name='tempest-ServerActionsTestJSON-1818290169-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:37:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='43931603bc9f40eab8e548129d4c50cb',uuid=da31f324-38ad-4f77-b724-3ef1628be336,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.752 188781 DEBUG nova.network.os_vif_util [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converting VIF {"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.754 188781 DEBUG nova.network.os_vif_util [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.755 188781 DEBUG os_vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
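The three records above show nova converting its internal VIF model into an os-vif VIFOpenVSwitch object and handing it to os_vif.plug(). A minimal sketch of that call path through the public os-vif API, with field values copied from the log; the network and port_profile fields are omitted for brevity, and invoking this outside nova is an assumption:

```python
# Sketch only: plug the VIF from the log via the public os-vif API.
# Values come from the records above; network/port_profile omitted.
import os_vif
from os_vif.objects import instance_info as osv_instance_info
from os_vif.objects import vif as osv_vif

os_vif.initialize()  # load the os-vif plugins, including 'ovs'

vif = osv_vif.VIFOpenVSwitch(
    id='b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2',
    address='fa:16:3e:c6:08:9f',
    bridge_name='br-int',
    vif_name='tapb9a6ef82-e3',
)
instance = osv_instance_info.InstanceInfo(
    uuid='da31f324-38ad-4f77-b724-3ef1628be336',
    name='instance-00000007',
)

# Produces the "Plugging vif ..." / "Successfully plugged vif ..." pair.
os_vif.plug(vif, instance)
```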
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.756 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.757 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.759 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.763 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.764 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9a6ef82-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.765 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9a6ef82-e3, col_values=(('external_ids', {'iface-id': 'b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:08:9f', 'vm-uuid': 'da31f324-38ad-4f77-b724-3ef1628be336'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
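The AddPortCommand/DbSetCommand pair above is the os-vif ovs plugin wiring the tap device into br-int and tagging the Interface row so ovn-controller can claim it. A rough equivalent through ovsdbapp's Open vSwitch schema API; the database socket path is an assumption, and the matching ovs-vsctl one-liner is shown in a comment:

```python
# Sketch of the same two-command transaction via ovsdbapp's OVS schema
# API (socket path is an assumption; adjust for your host).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

external_ids = {
    'iface-id': 'b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:c6:08:9f',
    'vm-uuid': 'da31f324-38ad-4f77-b724-3ef1628be336',
}

# Mirrors AddPortCommand(may_exist=True) + DbSetCommand on Interface,
# i.e. roughly:
#   ovs-vsctl --may-exist add-port br-int tapb9a6ef82-e3 \
#     -- set Interface tapb9a6ef82-e3 external_ids:iface-id=... etc.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapb9a6ef82-e3', may_exist=True))
    txn.add(api.db_set('Interface', 'tapb9a6ef82-e3',
                       ('external_ids', external_ids)))
```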
Feb 19 20:37:09 compute-0 NetworkManager[57033]: <info>  [1771533429.7698] manager: (tapb9a6ef82-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.772 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.776 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.777 188781 INFO os_vif [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3')
Feb 19 20:37:09 compute-0 kernel: tapb9a6ef82-e3: entered promiscuous mode
Feb 19 20:37:09 compute-0 systemd-udevd[254048]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:37:09 compute-0 NetworkManager[57033]: <info>  [1771533429.8427] manager: (tapb9a6ef82-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00129|binding|INFO|Claiming lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for this chassis.
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00130|binding|INFO|b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2: Claiming fa:16:3e:c6:08:9f 10.100.0.13
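ovn-controller claiming the lport is what binds the port to this chassis. One hedged way to confirm the result from the compute node is to query the Southbound Port_Binding table, assuming ovn-sbctl is installed and already pointed at the SB database:

```python
# Hedged check: ask the OVN Southbound DB which chassis holds the lport.
import subprocess

out = subprocess.run(
    ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
     'logical_port=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2'],
    capture_output=True, text=True, check=True)
print(out.stdout)  # chassis should reference compute-0 once the claim lands
```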
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.849 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 NetworkManager[57033]: <info>  [1771533429.8529] device (tapb9a6ef82-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00131|binding|INFO|Setting lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 ovn-installed in OVS
Feb 19 20:37:09 compute-0 ovn_controller[98843]: 2026-02-19T20:37:09Z|00132|binding|INFO|Setting lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 up in Southbound
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.854 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:08:9f 10.100.0.13'], port_security=['fa:16:3e:c6:08:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'da31f324-38ad-4f77-b724-3ef1628be336', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02e853c-7c37-4c12-a959-0da0ff097734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c8b3e035bb347acad9c4027457ee296', 'neutron:revision_number': '4', 'neutron:security_group_ids': '745eb45a-1fad-4b86-be2d-ed9c647c807b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5262953e-25bb-44de-850c-ced354d0d447, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.856 108175 INFO neutron.agent.ovn.metadata.agent [-] Port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 in datapath d02e853c-7c37-4c12-a959-0da0ff097734 bound to our chassis
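The "Matched UPDATE: PortBindingUpdatedEvent" record above is ovsdbapp's row-event machinery inside the metadata agent reacting to the chassis change. A minimal sketch of such an event class; the real neutron class carries additional filtering and provisioning logic:

```python
# Minimal sketch of an ovsdbapp row event like the one matched above;
# the real neutron agent class does more filtering than this.
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        # (events, table, conditions) as reported in the matched event
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the port transitions from unbound to bound.
        return bool(row.chassis) and not getattr(old, 'chassis', None)

    def run(self, event, row, old):
        print(f'Port {row.logical_port} bound to {self.chassis_name}')
```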
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.855 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 NetworkManager[57033]: <info>  [1771533429.8592] device (tapb9a6ef82-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.858 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d02e853c-7c37-4c12-a959-0da0ff097734
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.865 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.869 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[367b066a-50df-476b-807c-10ef55899b81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.870 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd02e853c-71 in ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
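Provisioning the datapath creates a veth pair: tapd02e853c-71 goes inside the ovnmeta- namespace while tapd02e853c-70 stays in the root namespace to be plugged into br-int. The agent performs this through oslo.privsep; a rough pyroute2 equivalent, assuming the namespace does not exist yet:

```python
# Rough pyroute2 equivalent of "Creating VETH tapd02e853c-71 in
# ovnmeta-... namespace"; names are taken from the log, and the agent
# actually does this via oslo.privsep, not directly.
from pyroute2 import IPRoute, netns

ns_name = 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734'
netns.create(ns_name)  # raises if it already exists

with IPRoute() as ipr:
    # veth pair: -70 stays in the root namespace, -71 moves into the
    # metadata namespace.
    ipr.link('add', ifname='tapd02e853c-70', kind='veth',
             peer='tapd02e853c-71')
    peer = ipr.link_lookup(ifname='tapd02e853c-71')[0]
    ipr.link('set', index=peer, net_ns_fd=ns_name)
    host = ipr.link_lookup(ifname='tapd02e853c-70')[0]
    ipr.link('set', index=host, state='up')
```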
Feb 19 20:37:09 compute-0 nova_compute[188777]: 2026-02-19 20:37:09.870 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.873 242160 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd02e853c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.873 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2eedd1-82b9-49be-9b31-453e07679418]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.874 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[66b2aa8a-8c8c-4cb2-a2f6-885071526af6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.884 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9bfcb6-06a3-4822-9b2e-29b1fd96e108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 systemd-machined[158158]: New machine qemu-13-instance-00000007.
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.900 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[98bc11b1-fa04-4a47-b885-901f1cf945a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000007.
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.922 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3f11d9-8598-4c81-bbc8-a26130febcc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.930 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a5684d06-cf7a-42a6-ac95-259e983e8d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 NetworkManager[57033]: <info>  [1771533429.9319] manager: (tapd02e853c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.960 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[d13dbff4-ce6e-4ec5-a15b-2d58bf616548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.963 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2ecae3-72c3-457e-a1ce-eecbcfdb55b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:09 compute-0 NetworkManager[57033]: <info>  [1771533429.9854] device (tapd02e853c-70): carrier: link connected
Feb 19 20:37:09 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:09.993 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb0edd9-2707-4184-a6eb-2b88cf44c7a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.016 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[31402988-1f80-4828-bd12-d1e2e57e79ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd02e853c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:2a:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494145, 'reachable_time': 20697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254190, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.032 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1a4691-9815-42ce-ba65-466b56de8522]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:2a35'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494145, 'tstamp': 494145}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254191, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.046 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[97fa9a93-84f4-4129-bd77-f7dd735b24ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd02e853c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:2a:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494145, 'reachable_time': 20697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254192, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.082 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[df6287cb-0880-47e4-ab02-8a2e0932ec0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.143 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[acf4a445-668a-4cb5-a2e4-b55d8406d314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.145 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02e853c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.146 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.147 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02e853c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:10 compute-0 kernel: tapd02e853c-70: entered promiscuous mode
Feb 19 20:37:10 compute-0 NetworkManager[57033]: <info>  [1771533430.1523] manager: (tapd02e853c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.156 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.158 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd02e853c-70, col_values=(('external_ids', {'iface-id': 'c4f25fb9-c5df-4323-a436-ca67d28f2bc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:10 compute-0 ovn_controller[98843]: 2026-02-19T20:37:10Z|00133|binding|INFO|Releasing lport c4f25fb9-c5df-4323-a436-ca67d28f2bc3 from this chassis (sb_readonly=0)
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.160 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.167 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.168 108175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d02e853c-7c37-4c12-a959-0da0ff097734.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d02e853c-7c37-4c12-a959-0da0ff097734.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.169 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[4568f6b8-e2fb-48f3-9277-eb666f5f6605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.171 108175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: global
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     log         /dev/log local0 debug
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     log-tag     haproxy-metadata-proxy-d02e853c-7c37-4c12-a959-0da0ff097734
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     user        root
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     group       root
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     maxconn     1024
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     pidfile     /var/lib/neutron/external/pids/d02e853c-7c37-4c12-a959-0da0ff097734.pid.haproxy
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     daemon
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: defaults
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     log global
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     mode http
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     option httplog
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     option dontlognull
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     option http-server-close
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     option forwardfor
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     retries                 3
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     timeout http-request    30s
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     timeout connect         30s
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     timeout client          32s
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     timeout server          32s
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     timeout http-keep-alive 30s
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: listen listener
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     bind 169.254.169.254:80
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     server metadata /var/lib/neutron/metadata_proxy
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:     http-request add-header X-OVN-Network-ID d02e853c-7c37-4c12-a959-0da0ff097734
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
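The generated haproxy configuration binds 169.254.169.254:80 inside the ovnmeta- namespace and forwards each request, tagged with the X-OVN-Network-ID header, to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with '/' as a unix socket path). A hedged smoke test from the hypervisor; requires root, and the availability of curl is an assumption:

```python
# Hedged smoke test for the proxy configured above: hit the metadata IP
# from inside the ovnmeta namespace (namespace name taken from the log).
import subprocess

ns = 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734'
subprocess.run(
    ['ip', 'netns', 'exec', ns,
     'curl', '-s', 'http://169.254.169.254/latest/meta-data/'],
    check=False)
```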
Feb 19 20:37:10 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:10.172 108175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'env', 'PROCESS_TAG=haproxy-d02e853c-7c37-4c12-a959-0da0ff097734', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d02e853c-7c37-4c12-a959-0da0ff097734.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.203 188781 DEBUG nova.compute.manager [req-fa50b88d-a095-4a8f-af32-5d013f4fbb8f req-5a5a2f81-d654-441b-b903-447aff14026c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-unplugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.204 188781 DEBUG oslo_concurrency.lockutils [req-fa50b88d-a095-4a8f-af32-5d013f4fbb8f req-5a5a2f81-d654-441b-b903-447aff14026c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.204 188781 DEBUG oslo_concurrency.lockutils [req-fa50b88d-a095-4a8f-af32-5d013f4fbb8f req-5a5a2f81-d654-441b-b903-447aff14026c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.205 188781 DEBUG oslo_concurrency.lockutils [req-fa50b88d-a095-4a8f-af32-5d013f4fbb8f req-5a5a2f81-d654-441b-b903-447aff14026c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.205 188781 DEBUG nova.compute.manager [req-fa50b88d-a095-4a8f-af32-5d013f4fbb8f req-5a5a2f81-d654-441b-b903-447aff14026c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-unplugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:37:10 compute-0 nova_compute[188777]: 2026-02-19 20:37:10.206 188781 WARNING nova.compute.manager [req-fa50b88d-a095-4a8f-af32-5d013f4fbb8f req-5a5a2f81-d654-441b-b903-447aff14026c 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received unexpected event network-vif-unplugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with vm_state active and task_state reboot_started_hard.
Feb 19 20:37:10 compute-0 podman[254221]: 2026-02-19 20:37:10.578564883 +0000 UTC m=+0.069990352 container create 49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 19 20:37:10 compute-0 systemd[1]: Started libpod-conmon-49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d.scope.
Feb 19 20:37:10 compute-0 podman[254221]: 2026-02-19 20:37:10.536898135 +0000 UTC m=+0.028323624 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 19 20:37:10 compute-0 systemd[1]: Started libcrun container.
Feb 19 20:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae0f001d38c83e5f6d95bf56f1ff302b551068f8d53b487dda7b917615cc946/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 19 20:37:10 compute-0 podman[254221]: 2026-02-19 20:37:10.676104435 +0000 UTC m=+0.167529904 container init 49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 19 20:37:10 compute-0 podman[254221]: 2026-02-19 20:37:10.68206112 +0000 UTC m=+0.173486579 container start 49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 19 20:37:10 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [NOTICE]   (254238) : New worker (254240) forked
Feb 19 20:37:10 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [NOTICE]   (254238) : Loading success.
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.091 188781 DEBUG nova.virt.libvirt.host [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Removed pending event for da31f324-38ad-4f77-b724-3ef1628be336 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.092 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533431.0910454, da31f324-38ad-4f77-b724-3ef1628be336 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.092 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] VM Resumed (Lifecycle Event)
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.094 188781 DEBUG nova.compute.manager [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.099 188781 INFO nova.virt.libvirt.driver [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance rebooted successfully.
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.099 188781 DEBUG nova.compute.manager [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.177 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.183 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.226 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.227 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533431.0928085, da31f324-38ad-4f77-b724-3ef1628be336 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.228 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] VM Started (Lifecycle Event)
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.245 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.252 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:37:11 compute-0 nova_compute[188777]: 2026-02-19 20:37:11.403 188781 DEBUG oslo_concurrency.lockutils [None req-f600f436-ca9f-4d3b-9987-a47ce6447091 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
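The 5.871s lock span recorded above is nova serializing the entire hard reboot under an oslo.concurrency lock keyed on the instance UUID. The pattern, as a minimal sketch; the function name is illustrative rather than nova's actual code:

```python
# Minimal sketch of the oslo.concurrency pattern behind the
# 'Lock "da31f324-..." acquired/released' records above.
from oslo_concurrency import lockutils


@lockutils.synchronized('da31f324-38ad-4f77-b724-3ef1628be336')
def do_reboot_instance():
    # critical section: at most one reboot per instance UUID at a time
    ...
```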
Feb 19 20:37:13 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:13.501 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.770 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.799 188781 DEBUG nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.800 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.801 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.801 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.802 188781 DEBUG nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.802 188781 WARNING nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received unexpected event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with vm_state active and task_state None.
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.802 188781 DEBUG nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.803 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.803 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.804 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.804 188781 DEBUG nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.805 188781 WARNING nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received unexpected event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with vm_state active and task_state None.
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.805 188781 DEBUG nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.806 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.806 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.806 188781 DEBUG oslo_concurrency.lockutils [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.807 188781 DEBUG nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:37:13 compute-0 nova_compute[188777]: 2026-02-19 20:37:13.807 188781 WARNING nova.compute.manager [req-8a142c8f-fae4-40e5-bca8-3a76dc264163 req-5fa9028f-9eed-4348-916b-dda0ce2f6a15 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received unexpected event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with vm_state active and task_state None.
Feb 19 20:37:14 compute-0 nova_compute[188777]: 2026-02-19 20:37:14.768 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.152 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.152 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.152 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.153 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f66d95e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.159 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 997ebdcf-7eab-485b-8fbf-d21112c78946 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:37:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:15.161 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/997ebdcf-7eab-485b-8fbf-d21112c78946 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:37:15 compute-0 nova_compute[188777]: 2026-02-19 20:37:15.797 188781 INFO nova.compute.manager [None req-108b160d-1ec8-4c97-8fe9-fb516a421e48 ef20d0162e404953a8f45beac9fadf18 eb9e3732b9f4456d9f90bf3e156f6f7c - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Get console output
Feb 19 20:37:15 compute-0 nova_compute[188777]: 2026-02-19 20:37:15.916 242038 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 19 20:37:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:16.281 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1994 Content-Type: application/json Date: Thu, 19 Feb 2026 20:37:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d3c2b1ef-f797-4283-a0c8-421fda524974 x-openstack-request-id: req-d3c2b1ef-f797-4283-a0c8-421fda524974 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:37:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:16.281 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "997ebdcf-7eab-485b-8fbf-d21112c78946", "name": "tempest-AttachInterfacesUnderV243Test-server-684728485", "status": "ACTIVE", "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "user_id": "90c9e30d17534357bece36d1acaab39c", "metadata": {}, "hostId": "f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd", "image": {"id": "17b9bce8-a91b-495d-ac33-cf63893413f9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/17b9bce8-a91b-495d-ac33-cf63893413f9"}]}, "flavor": {"id": "68c4e072-7c2b-48a1-8e07-0fd69e153270", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/68c4e072-7c2b-48a1-8e07-0fd69e153270"}]}, "created": "2026-02-19T20:36:15Z", "updated": "2026-02-19T20:36:31Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-572210270-network": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f7:60:ee"}, {"version": 4, "addr": "192.168.122.211", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f7:60:ee"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/997ebdcf-7eab-485b-8fbf-d21112c78946"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/997ebdcf-7eab-485b-8fbf-d21112c78946"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-148468656", "OS-SRV-USG:launched_at": "2026-02-19T20:36:31.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1046838934"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:37:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:16.283 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/997ebdcf-7eab-485b-8fbf-d21112c78946 used request id req-d3c2b1ef-f797-4283-a0c8-421fda524974 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:37:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:16.284 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:37:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:16.287 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance da31f324-38ad-4f77-b724-3ef1628be336 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:37:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:16.288 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/da31f324-38ad-4f77-b724-3ef1628be336 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:37:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:17.219 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1979 Content-Type: application/json Date: Thu, 19 Feb 2026 20:37:16 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b6ee7fd5-a3e8-41e9-a371-f849e2a23990 x-openstack-request-id: req-b6ee7fd5-a3e8-41e9-a371-f849e2a23990 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:37:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:17.220 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "da31f324-38ad-4f77-b724-3ef1628be336", "name": "tempest-ServerActionsTestJSON-server-541687296", "status": "ACTIVE", "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "user_id": "43931603bc9f40eab8e548129d4c50cb", "metadata": {}, "hostId": "7ebfc09032a75ba68e7acd80ecfd00ce5b18f0be8f786786e22620e9", "image": {"id": "17b9bce8-a91b-495d-ac33-cf63893413f9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/17b9bce8-a91b-495d-ac33-cf63893413f9"}]}, "flavor": {"id": "68c4e072-7c2b-48a1-8e07-0fd69e153270", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/68c4e072-7c2b-48a1-8e07-0fd69e153270"}]}, "created": "2026-02-19T20:35:30Z", "updated": "2026-02-19T20:37:11Z", "addresses": {"tempest-ServerActionsTestJSON-432434488-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c6:08:9f"}, {"version": 4, "addr": "192.168.122.241", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c6:08:9f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/da31f324-38ad-4f77-b724-3ef1628be336"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/da31f324-38ad-4f77-b724-3ef1628be336"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-175997513", "OS-SRV-USG:launched_at": "2026-02-19T20:35:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1883141367"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:37:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:17.220 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/da31f324-38ad-4f77-b724-3ef1628be336 used request id req-b6ee7fd5-a3e8-41e9-a371-f849e2a23990 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:37:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:17.221 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'da31f324-38ad-4f77-b724-3ef1628be336', 'name': 'tempest-ServerActionsTestJSON-server-541687296', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3c8b3e035bb347acad9c4027457ee296', 'user_id': '43931603bc9f40eab8e548129d4c50cb', 'hostId': '7ebfc09032a75ba68e7acd80ecfd00ce5b18f0be8f786786e22620e9', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:37:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:17.224 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance dff9d513-54f8-4d73-acf7-df610dc4d064 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:37:17 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:17.225 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/dff9d513-54f8-4d73-acf7-df610dc4d064 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:37:18 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:18.200 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1975 Content-Type: application/json Date: Thu, 19 Feb 2026 20:37:17 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-34a9e0d8-bb30-43b2-bfc7-a5b6ada7c186 x-openstack-request-id: req-34a9e0d8-bb30-43b2-bfc7-a5b6ada7c186 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:37:18 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:18.201 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "dff9d513-54f8-4d73-acf7-df610dc4d064", "name": "tempest-TestNetworkBasicOps-server-215985627", "status": "ACTIVE", "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "user_id": "ef20d0162e404953a8f45beac9fadf18", "metadata": {}, "hostId": "f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8", "image": {"id": "17b9bce8-a91b-495d-ac33-cf63893413f9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/17b9bce8-a91b-495d-ac33-cf63893413f9"}]}, "flavor": {"id": "68c4e072-7c2b-48a1-8e07-0fd69e153270", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/68c4e072-7c2b-48a1-8e07-0fd69e153270"}]}, "created": "2026-02-19T20:36:19Z", "updated": "2026-02-19T20:36:33Z", "addresses": {"tempest-network-smoke--1477620676": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c2:a8:ee"}, {"version": 4, "addr": "192.168.122.216", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c2:a8:ee"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/dff9d513-54f8-4d73-acf7-df610dc4d064"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/dff9d513-54f8-4d73-acf7-df610dc4d064"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1830742569", "OS-SRV-USG:launched_at": "2026-02-19T20:36:33.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1972760386"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:37:18 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:18.201 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/dff9d513-54f8-4d73-acf7-df610dc4d064 used request id req-34a9e0d8-bb30-43b2-bfc7-a5b6ada7c186 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:37:18 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:18.202 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:37:18 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:18.205 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1b6b1397-fda7-4470-883b-1cc5974fac84 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:37:18 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:18.206 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1b6b1397-fda7-4470-883b-1cc5974fac84 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:37:18 compute-0 nova_compute[188777]: 2026-02-19 20:37:18.772 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:19 compute-0 nova_compute[188777]: 2026-02-19 20:37:19.772 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.015 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Thu, 19 Feb 2026 20:37:18 GMT Keep-Alive: timeout=5, max=97 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4ad55f47-aa30-4188-afa7-26ab10223d83 x-openstack-request-id: req-4ad55f47-aa30-4188-afa7-26ab10223d83 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.016 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1b6b1397-fda7-4470-883b-1cc5974fac84", "name": "te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio", "status": "ACTIVE", "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "user_id": "4495bf20aedd42ff97fdae62ef729522", "metadata": {"metering.server_group": "08c5967c-a408-49e3-be73-425b7dd8ee8c"}, "hostId": "22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609", "image": {"id": "e98a7b34-d7ef-4dcd-b1f3-0a369d480f18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e98a7b34-d7ef-4dcd-b1f3-0a369d480f18"}]}, "flavor": {"id": "68c4e072-7c2b-48a1-8e07-0fd69e153270", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/68c4e072-7c2b-48a1-8e07-0fd69e153270"}]}, "created": "2026-02-19T20:36:48Z", "updated": "2026-02-19T20:36:59Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.142", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:56:ea:b9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1b6b1397-fda7-4470-883b-1cc5974fac84"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1b6b1397-fda7-4470-883b-1cc5974fac84"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:36:59.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.016 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1b6b1397-fda7-4470-883b-1cc5974fac84 used request id req-4ad55f47-aa30-4188-afa7-26ab10223d83 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.017 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.017 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.017 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.017 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.017 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:37:20.017928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.023 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 997ebdcf-7eab-485b-8fbf-d21112c78946 / tap44b4451c-db inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.024 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.031 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for da31f324-38ad-4f77-b724-3ef1628be336 / tapb9a6ef82-e3 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.031 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.037 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for dff9d513-54f8-4d73-acf7-df610dc4d064 / tap913d86d2-68 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.038 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.042 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1b6b1397-fda7-4470-883b-1cc5974fac84 / tap3b9e0369-31 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.042 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.043 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.043 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.043 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.043 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.044 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.044 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.044 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.044 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-541687296>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-541687296>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:37:20.044083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:37:20.045421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.046 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.046 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.046 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.047 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.048 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.048 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:37:20.047668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.049 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.049 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.049 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.049 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.049 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.049 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.050 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.050 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.050 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.050 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.050 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.051 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.051 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.052 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:37:20.050009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.052 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.052 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.052 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:37:20.052331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.075 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.094 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.119 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.143 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.143 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.144 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.145 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.145 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.145 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.146 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.146 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.146 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.146 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:37:20.144433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:37:20.146689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.168 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.168 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.180 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.180 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.193 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.194 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.206 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.206 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.206 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
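The block above is one complete pollster iteration as the agent logs it: discovery via the local_instances method, a coordination check against the (empty) hashrings, a heartbeat update, one DEBUG "volume" line per block device per instance, and a closing INFO. A minimal, self-contained Python sketch of that control flow follows; the function names and stub data are illustrative only, the real logic being _internal_pollster_run in ceilometer/polling/manager.py:

    # Illustrative sketch of the per-pollster cycle visible above;
    # not ceilometer's actual code.
    import logging

    LOG = logging.getLogger("sketch.polling")

    def run_pollster(name, discover, get_samples, coordination_group=None):
        LOG.debug("Executing discovery process for pollster [%s]", name)
        resources = discover()                      # "local_instances" discovery
        LOG.info("Polling pollster %s in the context of pollsters", name)
        LOG.debug("Checking if we need coordination for pollster [%s] "
                  "with coordination group name [%s]", name, coordination_group)
        LOG.debug("Pollster heartbeat update: %s", name)
        samples = []
        for resource in resources:
            for volume in get_samples(resource):    # one DEBUG line per device
                LOG.debug("%s/%s volume: %s", resource, name, volume)
                samples.append((resource, name, volume))
        LOG.info("Finished polling pollster %s in the context of pollsters", name)
        return samples

    if __name__ == "__main__":
        logging.basicConfig(level=logging.DEBUG)
        run_pollster("disk.device.capacity",
                     discover=lambda: ["997ebdcf-7eab-485b-8fbf-d21112c78946"],
                     get_samples=lambda r: [1073741824, 509952])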
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.207 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:37:20.207413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.244 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.245 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.282 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.291 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.330 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.330 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.358 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.359 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.359 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.359 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.359 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.359 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.360 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.360 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.360 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 32840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.360 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/cpu volume: 8620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.360 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 32620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 20510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:37:20.360135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
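The cpu meter polled above is cumulative guest CPU time in nanoseconds (instance 997ebdcf-... reads 32840000000 ns, i.e. about 32.84 s of CPU time since boot), so a utilisation figure has to be derived from two consecutive samples. A sketch, where the earlier reading, the interval, and the vCPU count are assumed values for illustration:

    # Derive a utilisation percentage from two cumulative "cpu" samples
    # (nanoseconds of guest CPU time). prev_ns, elapsed_s and vcpus are
    # assumptions, not values from this log.
    def cpu_util_percent(prev_ns, curr_ns, elapsed_s, vcpus=1):
        return (curr_ns - prev_ns) / (elapsed_s * 1e9 * vcpus) * 100.0

    # If a previous poll 300 s earlier had read 32540000000 ns for
    # instance 997ebdcf-..., interval utilisation was about 0.1 %:
    print(cpu_util_percent(32_540_000_000, 32_840_000_000, 300.0))  # 0.1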
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.361 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:37:20.362051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-541687296>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-541687296>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]
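The ERROR above is the agent's permanent blacklisting path: the preceding DEBUG line states that LibvirtInspector provides no data for OutgoingBytesRatePollster, so the pollster raises PollsterPermanentError naming the affected servers and the manager stops polling them for this meter on this source. A sketch of that pattern, with illustrative class and attribute names approximating ceilometer.polling.plugin_base, and a stub standing in for the rate pollster:

    # Illustrative sketch of permanent-error blacklisting; names are
    # approximations, not ceilometer's exact code.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_once(pollster, resources, blacklist):
        pollable = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(pollable))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling [...] anymore!"
            blacklist.extend(err.fail_res_list)
            return []

    class RateStub:
        def get_samples(self, resources):
            raise PollsterPermanentError(resources)

    blacklist = []
    poll_once(RateStub(), ["tempest-server-a", "tempest-server-b"], blacklist)
    print(blacklist)  # both servers now excluded from future polls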
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.362 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.363 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.363 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:37:20.363030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.363 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.363 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.read.latency volume: 662486941 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.363 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.read.latency volume: 994610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.364 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.364 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.364 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 642456600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.364 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 4670537 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:37:20.365675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.365 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.366 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.366 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.366 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.367 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.368 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.368 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.368 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.368 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:37:20.367614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.369 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:37:20.369524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.370 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.370 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.370 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.370 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.371 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.371 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.371 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:37:20.372507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.372 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.373 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.373 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.373 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.373 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:37:20.374283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.375 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.375 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.375 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.375 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.375 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.376 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.376 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.376 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.376 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.376 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.377 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.377 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.377 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 72929280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.377 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.377 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.378 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.378 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 72953856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.378 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.378 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.378 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.379 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.379 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.379 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.379 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.379 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.380 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.380 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.380 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:37:20.377134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:37:20.380132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.381 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.381 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.381 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.381 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.381 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.382 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.382 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
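With capacity, usage, and allocation all polled at this point, the pollster class names in the log (PerDeviceCapacityPollster, PerDevicePhysicalPollster, PerDeviceAllocationPollster) suggest the three meters mirror libvirt's per-device blockInfo triple (capacity, allocation, physical); for instance 997ebdcf-.../vda that matches the 1073741824 / 30089216 / 29949952 values above. A sketch reading the same triple directly, assuming the libvirt Python bindings and a qemu:///system connection on the compute node:

    # Read the (capacity, allocation, physical) triple that the
    # disk.device.{capacity,allocation,usage} meters appear to mirror.
    # URI, UUID and device name are taken from the log for illustration.
    import libvirt  # python3-libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("997ebdcf-7eab-485b-8fbf-d21112c78946")
    capacity, allocation, physical = dom.blockInfo("vda")
    print("disk.device.capacity   ->", capacity)    # 1073741824 in the log
    print("disk.device.allocation ->", allocation)  # 30089216
    print("disk.device.usage      ->", physical)    # 29949952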
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.382 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.382 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.383 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.383 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.383 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.383 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 2816704884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:37:20.383415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.383 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.384 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.384 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.384 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 14933546637 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.385 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.385 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.385 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.386 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.386 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.386 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.386 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.386 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.386 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.387 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:37:20.386938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.387 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.388 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.388 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.388 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.388 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.388 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:37:20.389199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.389 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.390 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.390 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.390 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.390 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.391 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:37:20.391903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.392 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.392 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.393 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.393 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.393 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.393 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.393 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:37:20.393423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.62109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:37:20.394579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance da31f324-38ad-4f77-b724-3ef1628be336: ceilometer.compute.pollsters.NoVolumeException
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.7265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 1b6b1397-fda7-4470-883b-1cc5974fac84: ceilometer.compute.pollsters.NoVolumeException
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
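[editor's note] The two "Unavailable" memory.usage samples above come from instances whose guest memory (balloon) statistics are not exposed through libvirt, so the pollster raises NoVolumeException and skips the sample. A minimal sketch to confirm this from the compute node, assuming the libvirt Python bindings are installed and qemu:///system is reachable; the UUID is one of the instances flagged above:

    # Minimal sketch: inspect the guest memory counters that back memory.usage.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("da31f324-38ad-4f77-b724-3ef1628be336")
    stats = dom.memoryStats()  # KiB counters reported by the balloon driver
    print(stats)
    # If keys like 'available'/'unused' are missing here, the guest is not
    # reporting balloon stats, which matches the "Unavailable" sample above.
    conn.close()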
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.395 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:37:20.396273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.396 15 DEBUG ceilometer.compute.pollsters [-] da31f324-38ad-4f77-b724-3ef1628be336/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.397 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.397 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.397 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.398 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.399 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.400 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.400 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:37:20.400 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:37:20 compute-0 nova_compute[188777]: 2026-02-19 20:37:20.493 188781 DEBUG nova.compute.manager [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Received event network-changed-913d86d2-685f-4393-9143-efa6e9c6941a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:37:20 compute-0 nova_compute[188777]: 2026-02-19 20:37:20.494 188781 DEBUG nova.compute.manager [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Refreshing instance network info cache due to event network-changed-913d86d2-685f-4393-9143-efa6e9c6941a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:37:20 compute-0 nova_compute[188777]: 2026-02-19 20:37:20.494 188781 DEBUG oslo_concurrency.lockutils [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:37:20 compute-0 nova_compute[188777]: 2026-02-19 20:37:20.495 188781 DEBUG oslo_concurrency.lockutils [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:37:20 compute-0 nova_compute[188777]: 2026-02-19 20:37:20.495 188781 DEBUG nova.network.neutron [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Refreshing network info cache for port 913d86d2-685f-4393-9143-efa6e9c6941a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:37:23 compute-0 podman[254260]: 2026-02-19 20:37:23.408457489 +0000 UTC m=+0.080530382 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, release=1770267347, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, version=9.7)
Feb 19 20:37:23 compute-0 podman[254261]: 2026-02-19 20:37:23.436317978 +0000 UTC m=+0.106505571 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
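[editor's note] These podman health_status records are emitted each time the 'healthcheck' command configured in config_data (mounted at /openstack/healthcheck) runs. The recorded state can be read back from container metadata; a minimal sketch, assuming the podman CLI on the host and taking the container name from the log's container_name field:

    # Minimal sketch: read the stored health state of one container.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "node_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    # Note: on older podman releases the field is .State.Healthcheck instead.
    print(health["Status"], health["FailingStreak"])  # e.g. "healthy" 0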
Feb 19 20:37:23 compute-0 nova_compute[188777]: 2026-02-19 20:37:23.776 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:23 compute-0 nova_compute[188777]: 2026-02-19 20:37:23.992 188781 DEBUG nova.network.neutron [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated VIF entry in instance network info cache for port 913d86d2-685f-4393-9143-efa6e9c6941a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:37:23 compute-0 nova_compute[188777]: 2026-02-19 20:37:23.992 188781 DEBUG nova.network.neutron [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:37:24 compute-0 nova_compute[188777]: 2026-02-19 20:37:24.470 188781 DEBUG oslo_concurrency.lockutils [req-897fccd7-365a-40e0-b322-b1ccc8fa8234 req-a780b724-04ab-44ac-9a86-5f66974aed31 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:37:24 compute-0 nova_compute[188777]: 2026-02-19 20:37:24.774 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:26 compute-0 podman[254307]: 2026-02-19 20:37:26.381278978 +0000 UTC m=+0.066966818 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:37:28 compute-0 nova_compute[188777]: 2026-02-19 20:37:28.777 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:29 compute-0 podman[204724]: time="2026-02-19T20:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:37:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32938 "" "Go-http-client/1.1"
Feb 19 20:37:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5774 "" "Go-http-client/1.1"
Feb 19 20:37:29 compute-0 nova_compute[188777]: 2026-02-19 20:37:29.777 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:30.454 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:37:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:30.455 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:37:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:37:30.455 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:37:31 compute-0 podman[254340]: 2026-02-19 20:37:31.40447107 +0000 UTC m=+0.076164934 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:37:31 compute-0 openstack_network_exporter[207898]: ERROR   20:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:37:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:37:31 compute-0 podman[254339]: 2026-02-19 20:37:31.418201198 +0000 UTC m=+0.093083922 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, distribution-scope=public, build-date=2024-09-18T21:23:30, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:37:31 compute-0 openstack_network_exporter[207898]: ERROR   20:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:37:31 compute-0 openstack_network_exporter[207898]: 
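[editor's note] The appctl errors above come from the exporter invoking dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, commands that only exist for the userspace (netdev/DPDK) datapath; when OVS runs the kernel datapath it answers "please specify an existing datapath", so these lines are expected noise rather than a fault. A minimal sketch to check which datapaths are actually open, assuming ovs-appctl on PATH and privilege to reach the OVS control socket:

    # Minimal sketch: list the datapaths OVS has open; the pmd-* commands
    # failing above require a "netdev" (userspace) datapath to exist.
    import subprocess

    dps = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True, check=True)
    print(dps.stdout)  # e.g. "system@ovs-system" -> kernel datapath only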
Feb 19 20:37:33 compute-0 ovn_controller[98843]: 2026-02-19T20:37:33Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:ea:b9 10.100.1.142
Feb 19 20:37:33 compute-0 ovn_controller[98843]: 2026-02-19T20:37:33Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:ea:b9 10.100.1.142
Feb 19 20:37:33 compute-0 nova_compute[188777]: 2026-02-19 20:37:33.779 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:34 compute-0 sshd-session[254379]: Invalid user n8n from 160.187.147.124 port 40060
Feb 19 20:37:34 compute-0 sshd-session[254379]: Received disconnect from 160.187.147.124 port 40060:11: Bye Bye [preauth]
Feb 19 20:37:34 compute-0 sshd-session[254379]: Disconnected from invalid user n8n 160.187.147.124 port 40060 [preauth]
Feb 19 20:37:34 compute-0 nova_compute[188777]: 2026-02-19 20:37:34.780 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:35 compute-0 podman[254381]: 2026-02-19 20:37:35.393389633 +0000 UTC m=+0.067522345 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:37:37 compute-0 podman[254404]: 2026-02-19 20:37:37.403738149 +0000 UTC m=+0.087962582 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 19 20:37:38 compute-0 nova_compute[188777]: 2026-02-19 20:37:38.782 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:39 compute-0 podman[254424]: 2026-02-19 20:37:39.467451869 +0000 UTC m=+0.155691334 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Feb 19 20:37:39 compute-0 nova_compute[188777]: 2026-02-19 20:37:39.782 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:42 compute-0 nova_compute[188777]: [SQL: SELECT 1]
Feb 19 20:37:42 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:42 compute-0 nova_compute[188777]: Traceback (most recent call last):
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.dialect.do_execute(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     cursor.execute(statement, parameters)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     result = self._query(query)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query
Feb 19 20:37:42 compute-0 nova_compute[188777]:     conn.query(q)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result
Feb 19 20:37:42 compute-0 nova_compute[188777]:     result.read()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read
Feb 19 20:37:42 compute-0 nova_compute[188777]:     first_packet = self.connection._read_packet()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet
Feb 19 20:37:42 compute-0 nova_compute[188777]:     packet_header = self._read_bytes(4)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise err.OperationalError(
Feb 19 20:37:42 compute-0 nova_compute[188777]: pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
Feb 19 20:37:42 compute-0 nova_compute[188777]: The above exception was the direct cause of the following exception:
Feb 19 20:37:42 compute-0 nova_compute[188777]: Traceback (most recent call last):
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener
Feb 19 20:37:42 compute-0 nova_compute[188777]:     connection.scalar(select(1))
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self.execute(object_, *multiparams, **params).scalar()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return connection._execute_clauseelement(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
Feb 19 20:37:42 compute-0 nova_compute[188777]:     ret = self._execute_context(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self._handle_dbapi_exception(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:     util.raise_(newraise, with_traceback=exc_info[2], from_=e)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.dialect.do_execute(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     cursor.execute(statement, parameters)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     result = self._query(query)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query
Feb 19 20:37:42 compute-0 nova_compute[188777]:     conn.query(q)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result
Feb 19 20:37:42 compute-0 nova_compute[188777]:     result.read()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read
Feb 19 20:37:42 compute-0 nova_compute[188777]:     first_packet = self.connection._read_packet()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet
Feb 19 20:37:42 compute-0 nova_compute[188777]:     packet_header = self._read_bytes(4)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise err.OperationalError(
Feb 19 20:37:42 compute-0 nova_compute[188777]: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Feb 19 20:37:42 compute-0 nova_compute[188777]: [SQL: SELECT 1]
Feb 19 20:37:42 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:42 compute-0 nova_compute[188777]: During handling of the above exception, another exception occurred:
Feb 19 20:37:42 compute-0 nova_compute[188777]: Traceback (most recent call last):
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     sock = socket.create_connection(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise err
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     sock.connect(sa)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     socket_checkerr(fd)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise socket.error(err, errno.errorcode[err])
Feb 19 20:37:42 compute-0 nova_compute[188777]: ConnectionRefusedError: [Errno 111] ECONNREFUSED
Feb 19 20:37:42 compute-0 nova_compute[188777]: During handling of the above exception, another exception occurred:
Feb 19 20:37:42 compute-0 nova_compute[188777]: Traceback (most recent call last):
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context
Feb 19 20:37:42 compute-0 nova_compute[188777]:     conn = self._revalidate_connection()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self._dbapi_connection = self.engine.raw_connection(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self._wrap_pool_connect(self.pool.connect, _connection)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     util.raise_(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return fn()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return _ConnectionFairy._checkout(self)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout
Feb 19 20:37:42 compute-0 nova_compute[188777]:     fairy = _ConnectionRecord.checkout(pool)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout
Feb 19 20:37:42 compute-0 nova_compute[188777]:     rec._checkin_failed(err, _fairy_was_created=False)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     compat.raise_(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout
Feb 19 20:37:42 compute-0 nova_compute[188777]:     dbapi_connection = rec.get_connection()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.__connect()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     pool.logger.debug("Error on connect(): %s", e)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     compat.raise_(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.dbapi_connection = connection = pool._invoke_creator(self)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return dialect.connect(*cargs, **cparams)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self.dbapi.connect(*cargs, **cparams)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return Connection(*args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.connect()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exc
Feb 19 20:37:42 compute-0 nova_compute[188777]: pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:42 compute-0 nova_compute[188777]: The above exception was the direct cause of the following exception:
Feb 19 20:37:42 compute-0 nova_compute[188777]: Traceback (most recent call last):
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return getattr(target, method)(*args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return fn(self, *args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save
Feb 19 20:37:42 compute-0 nova_compute[188777]:     db_service = db.service_update(self._context, self.id, updates)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper
Feb 19 20:37:42 compute-0 nova_compute[188777]:     ectxt.value = e.inner_exc
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.force_reraise()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise self.value
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return f(*args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return f(context, *args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update
Feb 19 20:37:42 compute-0 nova_compute[188777]:     service_ref = service_get(context, service_id)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return f(context, *args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get
Feb 19 20:37:42 compute-0 nova_compute[188777]:     result = query.first()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self.limit(1)._iter().first()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
Feb 19 20:37:42 compute-0 nova_compute[188777]:     result = self.session.execute(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     conn = self._connection_for_bind(bind)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self._transaction._connection_for_bind(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind
Feb 19 20:37:42 compute-0 nova_compute[188777]:     conn = bind.connect()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self._connection_cls(self, close_with_result=close_with_result)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.dispatch.engine_connect(self, _branch_from is not None)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     fn(*args, **kw)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener
Feb 19 20:37:42 compute-0 nova_compute[188777]:     connection.scalar(select(1))
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self.execute(object_, *multiparams, **params).scalar()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return connection._execute_clauseelement(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
Feb 19 20:37:42 compute-0 nova_compute[188777]:     ret = self._execute_context(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self._handle_dbapi_exception(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:     util.raise_(newraise, with_traceback=exc_info[2], from_=e)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context
Feb 19 20:37:42 compute-0 nova_compute[188777]:     conn = self._revalidate_connection()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self._dbapi_connection = self.engine.raw_connection(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self._wrap_pool_connect(self.pool.connect, _connection)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     util.raise_(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return fn()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return _ConnectionFairy._checkout(self)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout
Feb 19 20:37:42 compute-0 nova_compute[188777]:     fairy = _ConnectionRecord.checkout(pool)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout
Feb 19 20:37:42 compute-0 nova_compute[188777]:     rec._checkin_failed(err, _fairy_was_created=False)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     compat.raise_(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout
Feb 19 20:37:42 compute-0 nova_compute[188777]:     dbapi_connection = rec.get_connection()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.__connect()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     pool.logger.debug("Error on connect(): %s", e)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     compat.raise_(
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exception
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.dbapi_connection = connection = pool._invoke_creator(self)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return dialect.connect(*cargs, **cparams)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return self.dbapi.connect(*cargs, **cparams)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     return Connection(*args, **kwargs)
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__
Feb 19 20:37:42 compute-0 nova_compute[188777]:     self.connect()
Feb 19 20:37:42 compute-0 nova_compute[188777]:   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect
Feb 19 20:37:42 compute-0 nova_compute[188777]:     raise exc
Feb 19 20:37:42 compute-0 nova_compute[188777]: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:42 compute-0 nova_compute[188777]: [SQL: SELECT 1]
Feb 19 20:37:42 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db     raise result
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db [SQL: SELECT 1]
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in 
_read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    
return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._in
voke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:37:42 compute-0 nova_compute[188777]: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db 
Feb 19 20:37:42 compute-0 rsyslogd[239379]: message too long (15362) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:37:42 compute-0 rsyslogd[239379]: message too long (14504) with configured size 8096, begin of message is: 2026-02-19 20:37:42.438 188781 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:37:43 compute-0 nova_compute[188777]: 2026-02-19 20:37:43.785 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:44 compute-0 ovn_controller[98843]: 2026-02-19T20:37:44Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:08:9f 10.100.0.13
Feb 19 20:37:44 compute-0 nova_compute[188777]: 2026-02-19 20:37:44.788 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:46 compute-0 nova_compute[188777]: 2026-02-19 20:37:46.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:46 compute-0 nova_compute[188777]: 2026-02-19 20:37:46.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:37:48 compute-0 nova_compute[188777]: 2026-02-19 20:37:48.788 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:49 compute-0 nova_compute[188777]: 2026-02-19 20:37:49.792 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:52 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:52 compute-0 nova_compute[188777]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db     raise result
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:37:52 compute-0 nova_compute[188777]: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db 
Feb 19 20:37:52 compute-0 rsyslogd[239379]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:37:52 compute-0 rsyslogd[239379]: message too long (9052) with configured size 8096, begin of message is: 2026-02-19 20:37:52.372 188781 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:37:53 compute-0 nova_compute[188777]: 2026-02-19 20:37:53.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:53 compute-0 nova_compute[188777]: 2026-02-19 20:37:53.790 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:54 compute-0 nova_compute[188777]: 2026-02-19 20:37:54.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:54 compute-0 podman[254467]: 2026-02-19 20:37:54.399545398 +0000 UTC m=+0.089114402 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, release=1770267347, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:37:54 compute-0 podman[254468]: 2026-02-19 20:37:54.408450064 +0000 UTC m=+0.093936791 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:37:54 compute-0 nova_compute[188777]: 2026-02-19 20:37:54.796 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:57 compute-0 podman[254509]: 2026-02-19 20:37:57.438575194 +0000 UTC m=+0.106940015 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:57 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:57 compute-0 nova_compute[188777]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     task(self, context)
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task     raise result
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:37:57 compute-0 nova_compute[188777]: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task 
Feb 19 20:37:57 compute-0 rsyslogd[239379]: message too long (8132) with configured size 8096, begin of message is: 2026-02-19 20:37:57.468 188781 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:37:58 compute-0 nova_compute[188777]: 2026-02-19 20:37:58.469 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:37:58 compute-0 nova_compute[188777]: 2026-02-19 20:37:58.792 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:37:59 compute-0 podman[204724]: time="2026-02-19T20:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:37:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32938 "" "Go-http-client/1.1"
Feb 19 20:37:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5776 "" "Go-http-client/1.1"
Feb 19 20:37:59 compute-0 nova_compute[188777]: 2026-02-19 20:37:59.799 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:38:00 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:38:00 compute-0 nova_compute[188777]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     task(self, context)
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9891, in _heal_instance_info_cache
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     inst = objects.Instance.get_by_uuid(
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task     raise result
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task 
Feb 19 20:38:00 compute-0 nova_compute[188777]: 2026-02-19 20:38:00.313 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:00 compute-0 rsyslogd[239379]: message too long (8833) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:38:00 compute-0 rsyslogd[239379]: message too long (8897) with configured size 8096, begin of message is: 2026-02-19 20:38:00.311 188781 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:38:01 compute-0 nova_compute[188777]: 2026-02-19 20:38:01.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:01 compute-0 openstack_network_exporter[207898]: ERROR   20:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:38:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:38:01 compute-0 openstack_network_exporter[207898]: ERROR   20:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:38:01 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:02 compute-0 podman[254529]: 2026-02-19 20:38:02.392089105 +0000 UTC m=+0.071200064 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:38:02 compute-0 podman[254528]: 2026-02-19 20:38:02.409661231 +0000 UTC m=+0.093175276 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_id=kepler, distribution-scope=public)
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:38:02 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:38:02 compute-0 nova_compute[188777]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db     raise result
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:38:02 compute-0 nova_compute[188777]: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db 
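Note the split visible in the stack above: nova-compute never opens a database connection itself. Service.save() goes through oslo_versionedobjects' indirection_api, that is, an oslo.messaging call to nova-conductor ('object_action'), and the conductor's DBConnectionError comes back as oslo_messaging.rpc.client.RemoteError with the traceback list attached. A sketch of what the caller sees, assuming the cctxt/ctxt objects from the traceback (illustrative wrapper, not nova code):

    # Illustrative caller-side handling; RemoteError is the real
    # oslo.messaging type, the wrapper function itself is hypothetical.
    import oslo_messaging

    def object_action(cctxt, ctxt, objinst, objmethod, args, kwargs):
        try:
            return cctxt.call(ctxt, 'object_action', objinst=objinst,
                              objmethod=objmethod, args=args, kwargs=kwargs)
        except oslo_messaging.RemoteError as exc:
            # exc.exc_type is the remote class name ('DBConnectionError'),
            # exc.value its message; the heartbeat loop treats this as
            # transient and simply retries on the next periodic report.
            if exc.exc_type == 'DBConnectionError':
                return None
            raise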
Feb 19 20:38:02 compute-0 rsyslogd[239379]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:38:02 compute-0 rsyslogd[239379]: message too long (9052) with configured size 8096, begin of message is: 2026-02-19 20:38:02.448 188781 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
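The two rsyslogd complaints are collateral damage from the ~9 KB traceback payloads: they exceed rsyslog's configured 8096-byte per-message ceiling, so the copies stored via syslog get truncated (the journal retains the full text). If complete tracebacks need to survive in syslog, raising the limit in /etc/rsyslog.conf covers it; the value below is only an example:

    # rsyslog.conf sketch: lift the per-message size limit above the ~9 KB
    # payloads seen here (must appear before any input() definitions).
    global(maxMessageSize="16k")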
Feb 19 20:38:02 compute-0 ovn_controller[98843]: 2026-02-19T20:38:02Z|00134|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb 19 20:38:03 compute-0 nova_compute[188777]: 2026-02-19 20:38:03.794 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:04 compute-0 nova_compute[188777]: 2026-02-19 20:38:04.803 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:06 compute-0 podman[254566]: 2026-02-19 20:38:06.406636353 +0000 UTC m=+0.089415531 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:38:08 compute-0 podman[254589]: 2026-02-19 20:38:08.434070327 +0000 UTC m=+0.113123987 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:38:08 compute-0 nova_compute[188777]: 2026-02-19 20:38:08.795 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:09 compute-0 nova_compute[188777]: 2026-02-19 20:38:09.806 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:10 compute-0 podman[254610]: 2026-02-19 20:38:10.403573412 +0000 UTC m=+0.091085722 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
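The three podman health_status=healthy events are the periodic healthchecks declared in each container's config_data ('healthcheck': {'mount': ..., 'test': '/openstack/healthcheck ...'}). The same checks can be triggered on demand; a small sketch using the standard podman CLI, with the container names taken from the log:

    # Run the configured healthcheck by hand for the containers seen above;
    # 'podman healthcheck run' exits 0 when healthy, non-zero otherwise.
    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)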
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:38:12 compute-0 nova_compute[188777]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:38:12 compute-0 nova_compute[188777]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db     raise result
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 19 20:38:12 compute-0 nova_compute[188777]: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db 
Feb 19 20:38:12 compute-0 rsyslogd[239379]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:38:12 compute-0 rsyslogd[239379]: message too long (9052) with configured size 8096, begin of message is: 2026-02-19 20:38:12.396 188781 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 19 20:38:13 compute-0 nova_compute[188777]: 2026-02-19 20:38:13.797 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:14 compute-0 nova_compute[188777]: 2026-02-19 20:38:14.811 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:15 compute-0 sshd-session[254637]: Invalid user titu from 103.250.11.249 port 56680
Feb 19 20:38:16 compute-0 sshd-session[254637]: Received disconnect from 103.250.11.249 port 56680:11: Bye Bye [preauth]
Feb 19 20:38:16 compute-0 sshd-session[254637]: Disconnected from invalid user titu 103.250.11.249 port 56680 [preauth]
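The three sshd-session lines are unrelated to the OpenStack incident: a password-guessing probe for a nonexistent user from 103.250.11.249 that disconnects before authenticating. A quick way to tally such probes from a saved copy of this log, keyed on the exact message format above (the file path is an assumption):

    # Count 'Invalid user' SSH probes per source IP in a saved log file.
    import re
    from collections import Counter

    pattern = re.compile(r"sshd-session\[\d+\]: Invalid user (\S+) from (\S+) port")
    probes = Counter()
    with open("/var/log/messages") as fh:   # path is an assumption
        for line in fh:
            match = pattern.search(line)
            if match:
                probes[match.group(2)] += 1
    for ip, count in probes.most_common(10):
        print(ip, count)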
Feb 19 20:38:18 compute-0 nova_compute[188777]: 2026-02-19 20:38:18.799 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:19 compute-0 nova_compute[188777]: 2026-02-19 20:38:19.815 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.413 188781 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
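This INFO line closes the incident: the cell database became reachable again and the servicegroup DB driver's periodic heartbeat succeeded, ten seconds after the last failure (matching the report interval seen in the two errors above). A condensed paraphrase of that driver's loop, reconstructed from the _report_state frames in the tracebacks, not copied from nova's code:

    # Paraphrase of the heartbeat pattern in nova/servicegroup/drivers/db.py:
    # remember that we were down, retry every interval, log once on recovery.
    import logging

    LOG = logging.getLogger(__name__)

    def _report_state(service):
        try:
            service.service_ref.report_count += 1
            service.service_ref.save()          # RPC -> conductor -> cell DB
            if service.model_disconnected:
                service.model_disconnected = False
                LOG.info("Recovered from being unable to report status.")
        except Exception:
            service.model_disconnected = True
            LOG.exception("Unexpected error while reporting service status")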
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.526 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.528 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.528 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.529 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.530 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
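The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock pattern: do_terminate_instance serializes on the instance UUID, and _clear_events briefly takes a nested "<uuid>-events" lock inside it to drop queued external events. The same structure with the same library:

    # The named-lock nesting shown in the log, via oslo.concurrency.
    from oslo_concurrency import lockutils

    uuid = "da31f324-38ad-4f77-b724-3ef1628be336"   # instance from the log

    with lockutils.lock(uuid):                  # do_terminate_instance scope
        with lockutils.lock(uuid + "-events"):  # _clear_events scope
            pass  # clear pending external events for the instance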
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.531 188781 INFO nova.compute.manager [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Terminating instance
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.533 188781 DEBUG nova.compute.manager [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:38:22 compute-0 kernel: tapb9a6ef82-e3 (unregistering): left promiscuous mode
Feb 19 20:38:22 compute-0 NetworkManager[57033]: <info>  [1771533502.5842] device (tapb9a6ef82-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:38:22 compute-0 ovn_controller[98843]: 2026-02-19T20:38:22Z|00135|binding|INFO|Releasing lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 from this chassis (sb_readonly=0)
Feb 19 20:38:22 compute-0 ovn_controller[98843]: 2026-02-19T20:38:22Z|00136|binding|INFO|Setting lport b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 down in Southbound
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.591 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 ovn_controller[98843]: 2026-02-19T20:38:22Z|00137|binding|INFO|Removing iface tapb9a6ef82-e3 ovn-installed in OVS
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.595 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.603 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:08:9f 10.100.0.13'], port_security=['fa:16:3e:c6:08:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'da31f324-38ad-4f77-b724-3ef1628be336', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02e853c-7c37-4c12-a959-0da0ff097734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c8b3e035bb347acad9c4027457ee296', 'neutron:revision_number': '6', 'neutron:security_group_ids': '745eb45a-1fad-4b86-be2d-ed9c647c807b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5262953e-25bb-44de-850c-ced354d0d447, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.605 108175 INFO neutron.agent.ovn.metadata.agent [-] Port b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 in datapath d02e853c-7c37-4c12-a959-0da0ff097734 unbound from our chassis
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.606 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.607 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d02e853c-7c37-4c12-a959-0da0ff097734, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.609 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[66592136-96d6-46ae-aaa9-8d0e01dab4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.610 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 namespace which is not needed anymore
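With the last VIF gone from network d02e853c-7c37-4c12-a959-0da0ff097734, the OVN metadata agent tears down the per-network plumbing: the haproxy sidecar below is stopped and the ovnmeta-<network> namespace deleted. The namespace step is roughly equivalent to the following sketch (illustrative; the agent goes through oslo.privsep-wrapped helpers rather than direct root calls):

    # Rough equivalent of the namespace cleanup; requires root.
    from pyroute2 import netns

    ns = "ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734"   # from the log
    if ns in netns.listnetns():
        netns.remove(ns)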
Feb 19 20:38:22 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 19 20:38:22 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000007.scope: Consumed 42.605s CPU time.
Feb 19 20:38:22 compute-0 systemd-machined[158158]: Machine qemu-13-instance-00000007 terminated.
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.799 188781 INFO nova.virt.libvirt.driver [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Instance destroyed successfully.
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.800 188781 DEBUG nova.objects.instance [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lazy-loading 'resources' on Instance uuid da31f324-38ad-4f77-b724-3ef1628be336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:38:22 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [NOTICE]   (254238) : haproxy version is 2.8.14-c23fe91
Feb 19 20:38:22 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [NOTICE]   (254238) : path to executable is /usr/sbin/haproxy
Feb 19 20:38:22 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [WARNING]  (254238) : Exiting Master process...
Feb 19 20:38:22 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [WARNING]  (254238) : Exiting Master process...
Feb 19 20:38:22 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [ALERT]    (254238) : Current worker (254240) exited with code 143 (Terminated)
Feb 19 20:38:22 compute-0 neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734[254234]: [WARNING]  (254238) : All workers exited. Exiting... (0)
Feb 19 20:38:22 compute-0 systemd[1]: libpod-49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d.scope: Deactivated successfully.
Feb 19 20:38:22 compute-0 podman[254663]: 2026-02-19 20:38:22.83036032 +0000 UTC m=+0.077627424 container died 49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.835 188781 DEBUG nova.virt.libvirt.vif [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:35:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-541687296',display_name='tempest-ServerActionsTestJSON-server-541687296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-541687296',id=7,image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBGUXfqYgLpnaK0US8EwHfzCPv+m8vpQ+fPWU8q/hF6l9cNu9x6P14aljSv28A+SD7n7yEsgSHzHQXS8tsguQqzUZEu4v3AxpVAXh2tIOAWxaA3uNPd6KcWlT+WQySBOhg==',key_name='tempest-keypair-175997513',keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:35:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c8b3e035bb347acad9c4027457ee296',ramdisk_id='',reservation_id='r-a1krnbi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='17b9bce8-a91b-495d-ac33-cf63893413f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1818290169',owner_user_name='tempest-ServerActionsTestJSON-1818290169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:37:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='43931603bc9f40eab8e548129d4c50cb',uuid=da31f324-38ad-4f77-b724-3ef1628be336,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.836 188781 DEBUG nova.network.os_vif_util [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converting VIF {"id": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "address": "fa:16:3e:c6:08:9f", "network": {"id": "d02e853c-7c37-4c12-a959-0da0ff097734", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-432434488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c8b3e035bb347acad9c4027457ee296", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9a6ef82-e3", "ovs_interfaceid": "b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.837 188781 DEBUG nova.network.os_vif_util [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.837 188781 DEBUG os_vif [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.838 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.839 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9a6ef82-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.840 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.845 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.847 188781 INFO os_vif [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:08:9f,bridge_name='br-int',has_traffic_filtering=True,id=b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2,network=Network(d02e853c-7c37-4c12-a959-0da0ff097734),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9a6ef82-e3')
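
The unplug sequence above runs in three steps: nova converts its VIF dict into an os-vif VIFOpenVSwitch object, the os-vif OVS plugin queues a DelPortCommand through the OVSDB IDL, and the transaction commits about 10 ms later. A rough equivalent of that final step, sketched via the ovs-vsctl CLI (names copied from the log; os-vif itself speaks to OVSDB directly rather than shelling out):

```python
import subprocess

# Illustrative only: the DelPortCommand logged above behaves like this
# ovs-vsctl call; --if-exists mirrors if_exists=True, so a port that is
# already gone is not treated as an error.
subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapb9a6ef82-e3"],
    check=True,
)
```
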
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.847 188781 INFO nova.virt.libvirt.driver [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Deleting instance files /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336_del
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.848 188781 INFO nova.virt.libvirt.driver [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Deletion of /var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336_del complete
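
The `_del` suffix in the two lines above reflects a rename-then-remove pattern: the instance directory is renamed before its contents are deleted, so an interrupted cleanup leaves an obviously orphaned `*_del` directory rather than a partially emptied live one. A minimal sketch, with the path taken from the log:

```python
import os
import shutil

# Minimal sketch of the rename-then-remove cleanup visible in the log.
base = "/var/lib/nova/instances/da31f324-38ad-4f77-b724-3ef1628be336"
target = base + "_del"
if os.path.exists(base):
    os.rename(base, target)  # mark the directory as pending deletion
shutil.rmtree(target, ignore_errors=True)  # then drop its contents
```
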
Feb 19 20:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d-userdata-shm.mount: Deactivated successfully.
Feb 19 20:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae0f001d38c83e5f6d95bf56f1ff302b551068f8d53b487dda7b917615cc946-merged.mount: Deactivated successfully.
Feb 19 20:38:22 compute-0 podman[254663]: 2026-02-19 20:38:22.875711819 +0000 UTC m=+0.122978933 container cleanup 49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 19 20:38:22 compute-0 systemd[1]: libpod-conmon-49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d.scope: Deactivated successfully.
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.907 188781 INFO nova.compute.manager [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Took 0.37 seconds to destroy the instance on the hypervisor.
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.909 188781 DEBUG oslo.service.loopingcall [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.909 188781 DEBUG nova.compute.manager [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.910 188781 DEBUG nova.network.neutron [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:38:22 compute-0 podman[254703]: 2026-02-19 20:38:22.954746456 +0000 UTC m=+0.057825519 container remove 49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.959 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bf7686-1bf5-4300-96b3-5c4f06607f76]: (4, ('Thu Feb 19 08:38:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 (49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d)\n49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d\nThu Feb 19 08:38:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 (49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d)\n49f8e4227c05fa2a6f8598820d2b3046cbffa83414ff9eecc79f5645fc23d58d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.962 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[0035887c-475a-4d81-bcea-33e75e28e134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.963 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02e853c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:38:22 compute-0 kernel: tapd02e853c-70: left promiscuous mode
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.965 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 nova_compute[188777]: 2026-02-19 20:38:22.974 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.977 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[30f5822c-dd95-41bf-b539-785247b241fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.988 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad271fa-130c-4ab6-a516-a4e1b7ce5aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:38:22 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:22.990 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[d02d6cd6-7d0a-405b-91a2-d3000a709fc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:38:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:23.002 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[f08e192c-e718-4824-b3f4-49360d205f6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494138, 'reachable_time': 30334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254718, 'error': None, 'target': 'ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
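
The large privsep reply above is a pyroute2 netlink dump: a single RTM_NEWLINK message for the loopback device inside the ovnmeta namespace, with its IFLA_* attributes and counters serialized verbatim. A minimal sketch of the kind of query that produces such a message (run against the root namespace here for brevity; neutron's helper opens the target namespace first):

```python
from pyroute2 import IPRoute

# Each entry returned by get_links() is an RTM_NEWLINK message like the one
# dumped in the log; IFLA_* attributes are read with get_attr().
with IPRoute() as ipr:
    for link in ipr.get_links():
        print(link.get_attr("IFLA_IFNAME"),
              link["state"],
              link.get_attr("IFLA_MTU"))
```
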
Feb 19 20:38:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:23.005 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:38:23 compute-0 systemd[1]: run-netns-ovnmeta\x2dd02e853c\x2d7c37\x2d4c12\x2da959\x2d0da0ff097734.mount: Deactivated successfully.
Feb 19 20:38:23 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:23.005 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[29aa2b7d-6fef-44c0-8424-b3060d644efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
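
With the haproxy container gone and its tap port removed, the agent deletes the per-network metadata namespace itself. Neutron's privileged remove_netns helper does this through pyroute2, roughly as below (namespace name copied from the log):

```python
from pyroute2 import netns

# Roughly what the privileged remove_netns helper boils down to; equivalent
# to `ip netns delete ovnmeta-...` from the CLI.
netns.remove("ovnmeta-d02e853c-7c37-4c12-a959-0da0ff097734")
```
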
Feb 19 20:38:23 compute-0 nova_compute[188777]: 2026-02-19 20:38:23.802 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:25 compute-0 podman[254719]: 2026-02-19 20:38:25.387430667 +0000 UTC m=+0.071858425 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, version=9.7, container_name=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347)
Feb 19 20:38:25 compute-0 podman[254720]: 2026-02-19 20:38:25.391830193 +0000 UTC m=+0.076693884 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:38:25 compute-0 nova_compute[188777]: 2026-02-19 20:38:25.422 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:25.423 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:38:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:25.423 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.044 188781 DEBUG nova.compute.manager [req-b104f47a-b157-45d7-a740-568da58da71a req-0faa7243-2821-4dcf-9d20-304229b3d777 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-unplugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.045 188781 DEBUG oslo_concurrency.lockutils [req-b104f47a-b157-45d7-a740-568da58da71a req-0faa7243-2821-4dcf-9d20-304229b3d777 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.045 188781 DEBUG oslo_concurrency.lockutils [req-b104f47a-b157-45d7-a740-568da58da71a req-0faa7243-2821-4dcf-9d20-304229b3d777 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.045 188781 DEBUG oslo_concurrency.lockutils [req-b104f47a-b157-45d7-a740-568da58da71a req-0faa7243-2821-4dcf-9d20-304229b3d777 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.046 188781 DEBUG nova.compute.manager [req-b104f47a-b157-45d7-a740-568da58da71a req-0faa7243-2821-4dcf-9d20-304229b3d777 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-unplugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.046 188781 DEBUG nova.compute.manager [req-b104f47a-b157-45d7-a740-568da58da71a req-0faa7243-2821-4dcf-9d20-304229b3d777 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-unplugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
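
The four lockutils lines above show the standard oslo pattern: a named in-process lock, keyed by `<instance-uuid>-events`, serializes access to the per-instance event registry while the neutron notification is popped. The same pattern, sketched with the lock name from the log:

```python
from oslo_concurrency import lockutils

# Sketch of the acquire/release pair in the DEBUG lines above. Because no
# waiter had registered for this event, pop_instance_event() found nothing
# and the event fell through to _process_instance_event().
with lockutils.lock("da31f324-38ad-4f77-b724-3ef1628be336-events"):
    pass  # the event registry is consulted while the lock is held
```
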
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.286 188781 DEBUG nova.network.neutron [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.320 188781 INFO nova.compute.manager [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Took 3.41 seconds to deallocate network for instance.
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.433 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.434 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.687 188781 DEBUG nova.compute.manager [req-64599ce9-e4c5-4069-8bf2-485fce57e5a4 req-fbe6484d-3989-4dbb-aa29-d9f9a309cd91 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-deleted-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.805 188781 DEBUG nova.compute.provider_tree [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.824 188781 DEBUG nova.scheduler.client.report [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
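
Reading the inventory dict nova just confirmed with placement: for each resource class, the schedulable capacity is (total - reserved) scaled by allocation_ratio. Worked through with the numbers from the log line:

```python
# Capacity formula used by placement: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32   MEMORY_MB: 7167   DISK_GB: 70.2
```
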
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.867 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.905 188781 INFO nova.scheduler.client.report [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Deleted allocations for instance da31f324-38ad-4f77-b724-3ef1628be336
Feb 19 20:38:26 compute-0 nova_compute[188777]: 2026-02-19 20:38:26.980 188781 DEBUG oslo_concurrency.lockutils [None req-2e66b409-62c6-4b51-a6e2-f1f4bb54d7e6 43931603bc9f40eab8e548129d4c50cb 3c8b3e035bb347acad9c4027457ee296 - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:27 compute-0 nova_compute[188777]: 2026-02-19 20:38:27.843 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.208 188781 DEBUG nova.compute.manager [req-e8a36665-ca4e-4e18-85b9-dceb231b39af req-3475b03b-fd30-4864-b007-fb297a5c8631 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.208 188781 DEBUG oslo_concurrency.lockutils [req-e8a36665-ca4e-4e18-85b9-dceb231b39af req-3475b03b-fd30-4864-b007-fb297a5c8631 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "da31f324-38ad-4f77-b724-3ef1628be336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.209 188781 DEBUG oslo_concurrency.lockutils [req-e8a36665-ca4e-4e18-85b9-dceb231b39af req-3475b03b-fd30-4864-b007-fb297a5c8631 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.209 188781 DEBUG oslo_concurrency.lockutils [req-e8a36665-ca4e-4e18-85b9-dceb231b39af req-3475b03b-fd30-4864-b007-fb297a5c8631 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "da31f324-38ad-4f77-b724-3ef1628be336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.209 188781 DEBUG nova.compute.manager [req-e8a36665-ca4e-4e18-85b9-dceb231b39af req-3475b03b-fd30-4864-b007-fb297a5c8631 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] No waiting events found dispatching network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.210 188781 WARNING nova.compute.manager [req-e8a36665-ca4e-4e18-85b9-dceb231b39af req-3475b03b-fd30-4864-b007-fb297a5c8631 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Received unexpected event network-vif-plugged-b9a6ef82-e3db-4716-b9d9-bcdb3e9592f2 for instance with vm_state deleted and task_state None.
Feb 19 20:38:28 compute-0 podman[254761]: 2026-02-19 20:38:28.384440447 +0000 UTC m=+0.073517425 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 19 20:38:28 compute-0 nova_compute[188777]: 2026-02-19 20:38:28.804 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:29 compute-0 podman[204724]: time="2026-02-19T20:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:38:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:38:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5316 "" "Go-http-client/1.1"
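
These two access-log lines are the podman_exporter scraping the libpod REST API over the podman socket. The first query can be reproduced by hand; the socket path below is taken from the exporter's CONTAINER_HOST setting later in this log, and curl is used purely for illustration:

```python
import json
import subprocess

# Replay the first logged request against podman's unix socket. The host
# part of the URL is ignored for unix-socket connections ("d" is a common
# placeholder in podman documentation).
out = subprocess.run(
    ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
     "http://d/v4.9.3/libpod/containers/json?all=true"],
    check=True, capture_output=True, text=True,
).stdout
print(len(json.loads(out)), "containers")
```
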
Feb 19 20:38:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:30.455 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:30.455 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:30.456 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:31 compute-0 openstack_network_exporter[207898]: ERROR   20:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:38:31 compute-0 openstack_network_exporter[207898]: ERROR   20:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
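
These two ERRORs recur throughout the log and are benign on this host: the dpif-netdev/* appctl commands exist only for the userspace (netdev) datapath, and the VIF details earlier show this deployment binds ports with datapath_type "system", i.e. the kernel datapath. The failing call can be reproduced directly:

```python
import subprocess

# Same ovs-appctl command the exporter issues; on a kernel-datapath host
# OVS answers "please specify an existing datapath", which is exactly the
# error logged above.
subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"], check=False)
```
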
Feb 19 20:38:32 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:38:32.425 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
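
This DbSetCommand is the ack for the nb_cfg bump seen at 20:38:25: after the deliberate 7-second delay, the agent writes nb_cfg=14 into its Chassis_Private external_ids so northd can tell the chassis has caught up. A sketch using ovsdbapp's generic db_set helper, assuming an already-connected OVN southbound API handle (connection setup omitted; record UUID and key copied from the log):

```python
def ack_sb_cfg(sb_api, chassis_private_uuid: str, nb_cfg: int) -> None:
    """Replicate the logged transaction with ovsdbapp's db_set helper.

    `sb_api` is assumed to be a connected ovsdbapp backend for the
    OVN_Southbound schema; the logged command additionally passes
    if_exists=True.
    """
    sb_api.db_set(
        "Chassis_Private",
        chassis_private_uuid,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
    ).execute(check_error=True)

# e.g. ack_sb_cfg(sb_api, "e2fe6bb6-fad0-4563-8388-215a30f03e3f", 14)
```
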
Feb 19 20:38:32 compute-0 nova_compute[188777]: 2026-02-19 20:38:32.847 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:33 compute-0 nova_compute[188777]: 2026-02-19 20:38:33.832 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:33 compute-0 podman[254780]: 2026-02-19 20:38:33.935430609 +0000 UTC m=+0.084570459 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_id=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Feb 19 20:38:33 compute-0 podman[254781]: 2026-02-19 20:38:33.959100155 +0000 UTC m=+0.099586546 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:38:37 compute-0 ovn_controller[98843]: 2026-02-19T20:38:37Z|00138|binding|INFO|Releasing lport ac510fcf-4783-4f81-b107-f5dac80c5fad from this chassis (sb_readonly=0)
Feb 19 20:38:37 compute-0 ovn_controller[98843]: 2026-02-19T20:38:37Z|00139|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:38:37 compute-0 ovn_controller[98843]: 2026-02-19T20:38:37Z|00140|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:38:37 compute-0 nova_compute[188777]: 2026-02-19 20:38:37.063 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:37 compute-0 podman[254827]: 2026-02-19 20:38:37.379924408 +0000 UTC m=+0.064620130 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:38:37 compute-0 nova_compute[188777]: 2026-02-19 20:38:37.788 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771533502.7874067, da31f324-38ad-4f77-b724-3ef1628be336 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:38:37 compute-0 nova_compute[188777]: 2026-02-19 20:38:37.789 188781 INFO nova.compute.manager [-] [instance: da31f324-38ad-4f77-b724-3ef1628be336] VM Stopped (Lifecycle Event)
Feb 19 20:38:37 compute-0 nova_compute[188777]: 2026-02-19 20:38:37.817 188781 DEBUG nova.compute.manager [None req-d08bcfdd-2d3a-40e0-9be5-b0cda37c9575 - - - - - -] [instance: da31f324-38ad-4f77-b724-3ef1628be336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:38:37 compute-0 nova_compute[188777]: 2026-02-19 20:38:37.849 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:38 compute-0 nova_compute[188777]: 2026-02-19 20:38:38.835 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:39 compute-0 podman[254851]: 2026-02-19 20:38:39.406532817 +0000 UTC m=+0.092223517 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:38:41 compute-0 podman[254868]: 2026-02-19 20:38:41.434610843 +0000 UTC m=+0.121218249 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:38:42 compute-0 nova_compute[188777]: 2026-02-19 20:38:42.853 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:43 compute-0 nova_compute[188777]: 2026-02-19 20:38:43.837 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:47 compute-0 nova_compute[188777]: 2026-02-19 20:38:47.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:47 compute-0 nova_compute[188777]: 2026-02-19 20:38:47.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
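
The early exit above is expected with reclaim_instance_interval at its default of 0: deleted instances are destroyed immediately rather than parked in SOFT_DELETED, so the reclaim task has nothing to do. The guard amounts to:

```python
# Sketch of the guard reported by the DEBUG line; with the interval <= 0,
# soft delete is disabled and the periodic task returns immediately.
def _reclaim_queued_deletes(conf) -> None:
    if conf.reclaim_instance_interval <= 0:
        return  # "skipping", per the log
    # otherwise: look up SOFT_DELETED instances older than the interval
    # and destroy them.
```
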
Feb 19 20:38:47 compute-0 nova_compute[188777]: 2026-02-19 20:38:47.858 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:48 compute-0 nova_compute[188777]: 2026-02-19 20:38:48.840 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:49 compute-0 sshd-session[254894]: Invalid user oracle from 103.103.245.7 port 49296
Feb 19 20:38:50 compute-0 sshd-session[254894]: Received disconnect from 103.103.245.7 port 49296:11: Bye Bye [preauth]
Feb 19 20:38:50 compute-0 sshd-session[254894]: Disconnected from invalid user oracle 103.103.245.7 port 49296 [preauth]
Feb 19 20:38:52 compute-0 nova_compute[188777]: 2026-02-19 20:38:52.862 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:53 compute-0 nova_compute[188777]: 2026-02-19 20:38:53.843 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:54 compute-0 nova_compute[188777]: 2026-02-19 20:38:54.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:54 compute-0 nova_compute[188777]: 2026-02-19 20:38:54.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:56 compute-0 podman[254898]: 2026-02-19 20:38:56.41531318 +0000 UTC m=+0.083868608 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:38:56 compute-0 podman[254897]: 2026-02-19 20:38:56.439098949 +0000 UTC m=+0.119868247 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.296 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.297 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.297 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.297 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.388 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.435 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.436 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.482 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.489 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.535 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.536 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.584 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.591 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.638 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.640 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.686 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:38:57 compute-0 nova_compute[188777]: 2026-02-19 20:38:57.866 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.027 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.028 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4820MB free_disk=72.0841064453125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.029 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.029 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.120 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.121 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.121 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.122 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.122 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.152 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.188 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.189 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.215 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.243 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.349 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.369 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.391 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.392 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:38:58 compute-0 nova_compute[188777]: 2026-02-19 20:38:58.846 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:38:59 compute-0 podman[254959]: 2026-02-19 20:38:59.387746957 +0000 UTC m=+0.075065624 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 19 20:38:59 compute-0 nova_compute[188777]: 2026-02-19 20:38:59.389 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:38:59 compute-0 podman[204724]: time="2026-02-19T20:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:38:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:38:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5310 "" "Go-http-client/1.1"
Feb 19 20:39:01 compute-0 nova_compute[188777]: 2026-02-19 20:39:01.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:01 compute-0 openstack_network_exporter[207898]: ERROR   20:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:39:01 compute-0 openstack_network_exporter[207898]: ERROR   20:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.291 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.292 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.533 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.534 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.535 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:39:02 compute-0 nova_compute[188777]: 2026-02-19 20:39:02.871 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:03 compute-0 nova_compute[188777]: 2026-02-19 20:39:03.849 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:04 compute-0 podman[254978]: 2026-02-19 20:39:04.362080455 +0000 UTC m=+0.054648759 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Feb 19 20:39:04 compute-0 podman[254979]: 2026-02-19 20:39:04.372887331 +0000 UTC m=+0.061765071 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:39:06 compute-0 nova_compute[188777]: 2026-02-19 20:39:06.299 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:39:06 compute-0 nova_compute[188777]: 2026-02-19 20:39:06.313 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:39:06 compute-0 nova_compute[188777]: 2026-02-19 20:39:06.313 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:39:06 compute-0 nova_compute[188777]: 2026-02-19 20:39:06.314 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:06 compute-0 nova_compute[188777]: 2026-02-19 20:39:06.314 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:07 compute-0 nova_compute[188777]: 2026-02-19 20:39:07.875 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:08 compute-0 podman[255019]: 2026-02-19 20:39:08.386042135 +0000 UTC m=+0.079161842 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:39:08 compute-0 nova_compute[188777]: 2026-02-19 20:39:08.851 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:09 compute-0 sshd-session[255017]: Received disconnect from 154.12.80.151 port 60776:11: Bye Bye [preauth]
Feb 19 20:39:09 compute-0 sshd-session[255017]: Disconnected from authenticating user root 154.12.80.151 port 60776 [preauth]
Feb 19 20:39:10 compute-0 podman[255043]: 2026-02-19 20:39:10.384713706 +0000 UTC m=+0.077598603 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260216)
Feb 19 20:39:11 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:39:12 compute-0 podman[255064]: 2026-02-19 20:39:12.451274767 +0000 UTC m=+0.143312065 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:39:12 compute-0 nova_compute[188777]: 2026-02-19 20:39:12.878 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:13 compute-0 nova_compute[188777]: 2026-02-19 20:39:13.857 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.152 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.153 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.153 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.160 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.163 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.168 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.168 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.168 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.168 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.168 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:39:15.168638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.174 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.180 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.188 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
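
Every meter in this run follows the same five-step pattern that the network.outgoing.packets.error block above just completed: discovery (manager.py:294), an optional skip when discovery returns nothing new (manager.py:321), a coordination check that comes back empty because no hashring group is configured (manager.py:333 and :355), a heartbeat update that a second worker process (PID 12) persists (manager.py:636 and :502), and one sample per discovered instance (pollsters/__init__.py:108). A rough reconstruction of that control flow, with illustrative names rather than ceilometer's real API:

    # Sketch of the per-pollster cycle visible in this log; names are
    # illustrative, not ceilometer's actual classes or signatures.
    def run_pollster(pollster, discovery, coordinator, heartbeat, publish):
        resources = discovery.discover("local_instances")   # manager.py:294
        if not resources:
            return                                          # "Skip pollster ..." (manager.py:321)
        coordinator.check(pollster)                         # no-op here: group name is None (manager.py:333/:355)
        heartbeat.update(pollster.name)                     # manager.py:636; flushed by the sibling worker (:502)
        for resource in resources:
            publish(pollster.get_samples(resource))         # "<uuid>/<meter> volume: N" (pollsters/__init__.py:108)
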
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.202 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.203 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.203 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.203 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.203 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.204 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.204 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.205 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.206 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.206 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.206 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.207 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 2547 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.209 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.210 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.210 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
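
The *.delta meters report the change in the corresponding cumulative counter since the previous poll, which is why dff9d513-54f8-4d73-acf7-df610dc4d064 shows 0 here while still reporting nonzero cumulative traffic elsewhere in this cycle. A minimal sketch of that derivation, assuming a simple per-resource cache of the last reading (ceilometer's own bookkeeping may differ):

    # Last cumulative reading per (resource, meter); illustrative only.
    _last = {}

    def delta(resource_id, meter, cumulative):
        prev = _last.get((resource_id, meter))
        _last[(resource_id, meter)] = cumulative
        if prev is None or cumulative < prev:
            return 0  # first poll, or counter reset (e.g. instance reboot)
        return cumulative - prev
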
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.211 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.212 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.212 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.212 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:39:15.203756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:39:15.207346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:39:15.211251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:39:15.212602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.231 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.257 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.275 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
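
All three guests report power.state volume 1, consistent with the 'OS-EXT-STS:vm_state': 'running' seen in the discovery payloads. Assuming the value follows libvirt's virDomainState enumeration (which Nova's power_state mirrors), the mapping is:

    # libvirt virDomainState values; under that assumption, volume 1 == running.
    POWER_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }
    assert POWER_STATE[1] == "running"
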
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 1770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:39:15.277379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.278 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.279 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:39:15.278983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.290 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.291 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.309 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.310 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.326 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.327 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.327 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
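
Each instance yields two disk.device.capacity samples because each domain exposes two block devices. The first value is exactly 1 GiB, matching the m1.nano flavor's disk: 1; the second (485376 to 509952 bytes, i.e. under 500 KiB) is plausibly a config drive, though the device names are not included in these lines, so that is an inference. The arithmetic:

    GIB = 1024 ** 3
    assert 1 * GIB == 1073741824         # first device == the 1 GiB flavor disk
    print(509952 / 1024, 485376 / 1024)  # second device: 498.0 and 474.0 KiB
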
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.327 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.327 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.327 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.328 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.328 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:39:15.328068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.359 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.360 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.403 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.404 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.464 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.465 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.465 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.465 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.466 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.466 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.466 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.466 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.466 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 34210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.466 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 33930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.467 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 134060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.467 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
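
The cpu meter is cumulative guest CPU time in nanoseconds, so 34210000000 is about 34.2 s consumed since the instance started; a utilization figure needs two consecutive polls. A sketch of that conversion (the 300 s interval and the follow-up reading are hypothetical):

    def cpu_util_percent(ns_prev, ns_now, wall_seconds, vcpus=1):
        # Fraction of available CPU time used between two cumulative samples.
        used_seconds = (ns_now - ns_prev) / 1e9
        return 100.0 * used_seconds / (wall_seconds * vcpus)

    # e.g. a 1-vCPU m1.nano polled 300 s later at 34_240_000_000 ns:
    print(cpu_util_percent(34_210_000_000, 34_240_000_000, 300))  # 0.01 (%)
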
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.467 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.467 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.468 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.468 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.468 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.468 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.468 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.468 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.469 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.469 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.469 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.469 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 873501798 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.470 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 81108509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.470 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.470 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:39:15.466498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:39:15.468590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.471 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.472 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.472 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.473 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.474 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.474 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.474 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.475 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.475 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.475 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.475 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.475 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.475 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.476 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.476 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.476 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.476 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.477 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.477 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.477 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.478 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.478 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.478 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.478 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.478 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.478 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.479 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.479 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.479 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.479 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.479 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.480 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.480 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.480 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.480 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.480 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.481 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.481 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.481 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.482 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.483 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.483 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.483 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.484 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.484 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.485 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.486 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.486 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.486 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.486 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.487 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.487 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
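Note: each instance above logs two disk.device.allocation volumes because the per-device pollster emits one sample per attached block device. A minimal sketch of reading the same numbers straight from libvirt, assuming the libvirt-python bindings, a local qemu hypervisor, and devices named vda/vdb (the device names are an assumption; the log only shows instance UUIDs):

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
# instance UUID taken from the log lines above
dom = conn.lookupByUUIDString('997ebdcf-7eab-485b-8fbf-d21112c78946')
for dev in ('vda', 'vdb'):  # assumed device names
    capacity, allocation, physical = dom.blockInfo(dev)
    print(f'{dev} allocation: {allocation} bytes')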
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.487 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.487 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.488 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.488 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.488 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.488 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.488 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.488 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.489 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:39:15.471830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:39:15.473789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:39:15.475701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:39:15.478392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:39:15.480117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:39:15.482840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 3607776581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:39:15.485761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:39:15.488269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.490 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.491 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.491 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.491 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.491 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.491 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.492 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.492 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.492 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.492 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.493 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.493 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.493 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.494 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.494 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.494 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.494 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.494 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.495 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.495 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.495 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.496 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.496 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.497 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.497 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.497 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.497 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.497 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.498 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:39:15.492026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.499 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:39:15.494310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.499 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:39:15.497812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.499 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.500 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.500 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.500 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.500 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.500 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.500 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:39:15.499658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:39:15.500911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.501 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.501 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.501 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: 43.578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
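Note: the memory.usage volumes above (roughly 42.9 to 43.6 MB) come from libvirt domain memory statistics. A minimal sketch of pulling the same figure directly, assuming libvirt-python and a local qemu hypervisor; the exact formula ceilometer applies can differ between releases, so this is an approximation, not the agent's actual code path:

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
# instance UUID taken from the log line above
dom = conn.lookupByUUIDString('1b6b1397-fda7-4470-883b-1cc5974fac84')
stats = dom.memoryStats()  # values are reported in KiB
if 'available' in stats and 'unused' in stats:
    usage_mb = (stats['available'] - stats['unused']) / 1024.0
else:
    usage_mb = stats.get('rss', 0) / 1024.0  # fall back to resident set size
print(f'memory.usage ~ {usage_mb} MB')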
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.502 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.503 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.503 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.504 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.504 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.504 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.504 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:39:15.502835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.505 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:39:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:39:15.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
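Note: the block above is one complete polling task. For every meter the manager runs discovery, checks whether the pollster needs group coordination (none is configured here, so each check is skipped), records a heartbeat, converts per-instance stats to samples, and logs completion. A compressed, stdlib-only sketch of that cycle; every name below is an illustrative stand-in, not ceilometer's real internal API:

import datetime

def discover(method):
    # stand-in for AgentManager.discover('local_instances'); returns instance UUIDs
    return ['997ebdcf-7eab-485b-8fbf-d21112c78946',
            'dff9d513-54f8-4d73-acf7-df610dc4d064']

def needs_coordination(meter):
    # the log shows no hashring configured, so coordination is always skipped
    return False

def poll(meter, get_volume, publish):
    print(f'Polling pollster {meter}')
    instances = discover('local_instances')
    if not needs_coordination(meter):
        print(f'Pollster heartbeat update: {meter} '
              f'({datetime.datetime.now().isoformat()})')
    for uuid in instances:
        volume = get_volume(uuid)
        print(f'{uuid}/{meter} volume: {volume}')
        publish({'meter': meter, 'resource_id': uuid, 'volume': volume})
    print(f'Finished polling pollster {meter}')

samples = []
poll('disk.device.write.requests', lambda uuid: 328, samples.append)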
Feb 19 20:39:17 compute-0 nova_compute[188777]: 2026-02-19 20:39:17.882 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:18 compute-0 nova_compute[188777]: 2026-02-19 20:39:18.859 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:22 compute-0 nova_compute[188777]: 2026-02-19 20:39:22.889 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:23 compute-0 nova_compute[188777]: 2026-02-19 20:39:23.862 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:26 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:39:26.391 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:39:26 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:39:26.393 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:39:26 compute-0 nova_compute[188777]: 2026-02-19 20:39:26.397 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:27 compute-0 podman[255094]: 2026-02-19 20:39:27.409427714 +0000 UTC m=+0.079517873 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:39:27 compute-0 podman[255093]: 2026-02-19 20:39:27.424961416 +0000 UTC m=+0.094432906 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, version=9.7, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Feb 19 20:39:27 compute-0 nova_compute[188777]: 2026-02-19 20:39:27.894 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:28 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:39:28.396 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
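Note: the transaction above stamps neutron:ovn-metadata-sb-cfg onto the agent's Chassis_Private row to acknowledge the nb_cfg bump seen at 20:39:26. A sketch of issuing the same DbSetCommand through ovsdbapp; `api` here is an assumption, an already-connected ovsdbapp OVN Southbound API instance (building the Idl and Connection is omitted):

# `api` is assumed: an already-connected ovsdbapp OVN SB backend
api.db_set(
    'Chassis_Private',
    'e2fe6bb6-fad0-4563-8388-215a30f03e3f',  # record UUID from the log
    ('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),
).execute(check_error=True)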
Feb 19 20:39:28 compute-0 nova_compute[188777]: 2026-02-19 20:39:28.865 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:29 compute-0 podman[204724]: time="2026-02-19T20:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:39:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:39:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5314 "" "Go-http-client/1.1"
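Note: the two GETs above are the podman API service answering libpod REST calls over its unix socket (the Go-http-client user agent is prometheus-podman-exporter, configured further down with CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same container listing; the socket path is taken from that config:

import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""
    def __init__(self, sock_path):
        super().__init__('localhost')
        self.sock_path = sock_path
    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print([c['Names'] for c in containers])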
Feb 19 20:39:30 compute-0 podman[255138]: 2026-02-19 20:39:30.425479076 +0000 UTC m=+0.098675077 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:39:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:39:30.456 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:39:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:39:30.457 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:39:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:39:30.458 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
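Note: the acquire/wait/release triple above is oslo.concurrency's standard named-lock pattern around ProcessMonitor._check_child_processes. A sketch of both spellings, context manager and decorator, reusing the lock name from the log:

from oslo_concurrency import lockutils

with lockutils.lock('_check_child_processes'):
    pass  # body runs with the named lock held

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    pass  # same effect, expressed as a decorator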
Feb 19 20:39:31 compute-0 openstack_network_exporter[207898]: ERROR   20:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:39:31 compute-0 openstack_network_exporter[207898]: ERROR   20:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
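Note: "please specify an existing datapath" from the dpif-netdev appctl commands means this host has no userspace (netdev) datapath, so PMD rxq/perf statistics simply do not exist; on a kernel-datapath deployment like this one the exporter's errors are expected noise, not a fault. A sketch of guarding those calls, assuming ovs-appctl is on PATH and runnable by the caller:

import subprocess

datapaths = subprocess.run(['ovs-appctl', 'dpif/show'],
                           capture_output=True, text=True).stdout
if 'netdev' in datapaths:
    out = subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                         capture_output=True, text=True).stdout
    print(out)
else:
    print('no userspace datapath; skipping PMD stats')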
Feb 19 20:39:32 compute-0 nova_compute[188777]: 2026-02-19 20:39:32.897 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:33 compute-0 nova_compute[188777]: 2026-02-19 20:39:33.867 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:35 compute-0 podman[255154]: 2026-02-19 20:39:35.410437185 +0000 UTC m=+0.098280656 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Feb 19 20:39:35 compute-0 podman[255155]: 2026-02-19 20:39:35.446869547 +0000 UTC m=+0.126817423 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 19 20:39:37 compute-0 nova_compute[188777]: 2026-02-19 20:39:37.900 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:38 compute-0 nova_compute[188777]: 2026-02-19 20:39:38.869 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:39 compute-0 podman[255192]: 2026-02-19 20:39:39.393337258 +0000 UTC m=+0.069409838 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:39:41 compute-0 podman[255215]: 2026-02-19 20:39:41.361752079 +0000 UTC m=+0.050488620 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 19 20:39:42 compute-0 nova_compute[188777]: 2026-02-19 20:39:42.905 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:43 compute-0 podman[255239]: 2026-02-19 20:39:43.429497797 +0000 UTC m=+0.114991926 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:39:43 compute-0 sshd-session[255235]: Invalid user x from 103.119.94.10 port 39716
Feb 19 20:39:43 compute-0 nova_compute[188777]: 2026-02-19 20:39:43.872 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:44 compute-0 sshd-session[255235]: Received disconnect from 103.119.94.10 port 39716:11: Bye Bye [preauth]
Feb 19 20:39:44 compute-0 sshd-session[255235]: Disconnected from invalid user x 103.119.94.10 port 39716 [preauth]
Feb 19 20:39:44 compute-0 sshd-session[255237]: Invalid user claude from 158.180.74.7 port 39866
Feb 19 20:39:44 compute-0 sshd-session[255237]: Received disconnect from 158.180.74.7 port 39866:11: Bye Bye [preauth]
Feb 19 20:39:44 compute-0 sshd-session[255237]: Disconnected from invalid user claude 158.180.74.7 port 39866 [preauth]
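Note: the "Invalid user x/claude" and root preauth disconnects above are routine internet-wide SSH scanning hitting the node. A sketch for tallying probe sources from an exported journal; the file path is an assumption for the sketch (e.g. produced first with journalctl piped to a file):

import collections, re

pat = re.compile(r'Invalid user (\S+) from (\S+) port')
hits = collections.Counter()
with open('/tmp/messages') as f:  # assumed export path
    for line in f:
        m = pat.search(line)
        if m:
            hits[m.group(2)] += 1  # count probes per source address
print(hits.most_common(10))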
Feb 19 20:39:47 compute-0 nova_compute[188777]: 2026-02-19 20:39:47.910 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:48 compute-0 sshd-session[255265]: Received disconnect from 83.235.16.111 port 47874:11: Bye Bye [preauth]
Feb 19 20:39:48 compute-0 sshd-session[255265]: Disconnected from authenticating user root 83.235.16.111 port 47874 [preauth]
Feb 19 20:39:48 compute-0 nova_compute[188777]: 2026-02-19 20:39:48.875 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:49 compute-0 nova_compute[188777]: 2026-02-19 20:39:49.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:49 compute-0 nova_compute[188777]: 2026-02-19 20:39:49.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:39:52 compute-0 nova_compute[188777]: 2026-02-19 20:39:52.915 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:53 compute-0 nova_compute[188777]: 2026-02-19 20:39:53.879 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:54 compute-0 nova_compute[188777]: 2026-02-19 20:39:54.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:55 compute-0 nova_compute[188777]: 2026-02-19 20:39:55.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:56 compute-0 nova_compute[188777]: 2026-02-19 20:39:56.757 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:57 compute-0 nova_compute[188777]: 2026-02-19 20:39:57.920 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.292 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.293 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.294 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.294 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:39:58 compute-0 podman[255268]: 2026-02-19 20:39:58.377736905 +0000 UTC m=+0.067134297 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1770267347, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, version=9.7, config_id=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.388 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:39:58 compute-0 podman[255269]: 2026-02-19 20:39:58.397381046 +0000 UTC m=+0.080090501 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.435 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.436 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.508 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.517 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.570 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.571 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.657 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.665 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.731 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.732 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.784 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:39:58 compute-0 nova_compute[188777]: 2026-02-19 20:39:58.882 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.103 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.105 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4804MB free_disk=72.08408737182617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.105 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.105 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.262 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.263 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.263 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.263 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.264 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.380 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.395 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.397 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.398 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.399 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:39:59 compute-0 nova_compute[188777]: 2026-02-19 20:39:59.399 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:39:59 compute-0 podman[204724]: time="2026-02-19T20:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:39:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:39:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5316 "" "Go-http-client/1.1"
Feb 19 20:40:01 compute-0 nova_compute[188777]: 2026-02-19 20:40:01.407 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:01 compute-0 podman[255327]: 2026-02-19 20:40:01.411315312 +0000 UTC m=+0.085890220 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 19 20:40:01 compute-0 openstack_network_exporter[207898]: ERROR   20:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:40:01 compute-0 openstack_network_exporter[207898]: ERROR   20:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:40:01 compute-0 nova_compute[188777]: 2026-02-19 20:40:01.819 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:02 compute-0 nova_compute[188777]: 2026-02-19 20:40:02.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:02 compute-0 nova_compute[188777]: 2026-02-19 20:40:02.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:40:02 compute-0 nova_compute[188777]: 2026-02-19 20:40:02.480 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:40:02 compute-0 nova_compute[188777]: 2026-02-19 20:40:02.480 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:40:02 compute-0 nova_compute[188777]: 2026-02-19 20:40:02.480 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:40:02 compute-0 nova_compute[188777]: 2026-02-19 20:40:02.924 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:03 compute-0 nova_compute[188777]: 2026-02-19 20:40:03.651 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updating instance_info_cache with network_info: [{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:40:03 compute-0 nova_compute[188777]: 2026-02-19 20:40:03.670 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:40:03 compute-0 nova_compute[188777]: 2026-02-19 20:40:03.670 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:40:03 compute-0 nova_compute[188777]: 2026-02-19 20:40:03.671 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:03 compute-0 nova_compute[188777]: 2026-02-19 20:40:03.671 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:03 compute-0 nova_compute[188777]: 2026-02-19 20:40:03.884 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:04 compute-0 nova_compute[188777]: 2026-02-19 20:40:04.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:06 compute-0 podman[255346]: 2026-02-19 20:40:06.379327735 +0000 UTC m=+0.068714138 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 19 20:40:06 compute-0 podman[255347]: 2026-02-19 20:40:06.401419261 +0000 UTC m=+0.086276082 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 19 20:40:07 compute-0 nova_compute[188777]: 2026-02-19 20:40:07.928 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:08 compute-0 nova_compute[188777]: 2026-02-19 20:40:08.887 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:10 compute-0 podman[255383]: 2026-02-19 20:40:10.366909713 +0000 UTC m=+0.059328084 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:40:10 compute-0 ovn_controller[98843]: 2026-02-19T20:40:10Z|00141|binding|INFO|Releasing lport ac510fcf-4783-4f81-b107-f5dac80c5fad from this chassis (sb_readonly=0)
Feb 19 20:40:10 compute-0 ovn_controller[98843]: 2026-02-19T20:40:10Z|00142|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:40:10 compute-0 ovn_controller[98843]: 2026-02-19T20:40:10Z|00143|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:40:10 compute-0 nova_compute[188777]: 2026-02-19 20:40:10.728 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:12 compute-0 podman[255408]: 2026-02-19 20:40:12.361972383 +0000 UTC m=+0.054125374 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute)
Feb 19 20:40:12 compute-0 nova_compute[188777]: 2026-02-19 20:40:12.931 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:13 compute-0 nova_compute[188777]: 2026-02-19 20:40:13.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:13 compute-0 nova_compute[188777]: 2026-02-19 20:40:13.888 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:14 compute-0 podman[255427]: 2026-02-19 20:40:14.433836299 +0000 UTC m=+0.116752361 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 19 20:40:17 compute-0 nova_compute[188777]: 2026-02-19 20:40:17.935 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:18 compute-0 nova_compute[188777]: 2026-02-19 20:40:18.889 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:20 compute-0 nova_compute[188777]: 2026-02-19 20:40:20.278 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:20 compute-0 nova_compute[188777]: 2026-02-19 20:40:20.279 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:40:20 compute-0 nova_compute[188777]: 2026-02-19 20:40:20.295 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.223 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.223 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.240 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.311 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.312 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.327 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.328 188781 INFO nova.compute.claims [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Claim successful on node compute-0.ctlplane.example.com
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.493 188781 DEBUG nova.compute.provider_tree [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.527 188781 DEBUG nova.scheduler.client.report [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.561 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.562 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.607 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.608 188781 DEBUG nova.network.neutron [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.631 188781 INFO nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.648 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.747 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.749 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.750 188781 INFO nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Creating image(s)
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.751 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.751 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.752 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.770 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.850 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.851 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c4978917f5870b26b06a12225871f7dbd3da64fb" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.852 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c4978917f5870b26b06a12225871f7dbd3da64fb" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.865 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.933 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.934 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb,backing_fmt=raw /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.946 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.967 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb,backing_fmt=raw /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.968 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c4978917f5870b26b06a12225871f7dbd3da64fb" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:22 compute-0 nova_compute[188777]: 2026-02-19 20:40:22.969 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.015 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.016 188781 DEBUG nova.virt.disk.api [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Checking if we can resize image /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.017 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.081 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.082 188781 DEBUG nova.virt.disk.api [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Cannot resize image /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.082 188781 DEBUG nova.objects.instance [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lazy-loading 'migration_context' on Instance uuid c7d04a5a-1e2f-40c2-a686-18b23a5bddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.096 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.097 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Ensure instance console log exists: /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.097 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.098 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.098 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.561 188781 DEBUG nova.policy [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 19 20:40:23 compute-0 nova_compute[188777]: 2026-02-19 20:40:23.891 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:26 compute-0 nova_compute[188777]: 2026-02-19 20:40:26.428 188781 DEBUG nova.network.neutron [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Successfully created port: 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 19 20:40:27 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:27.569 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:40:27 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:27.570 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:40:27 compute-0 nova_compute[188777]: 2026-02-19 20:40:27.572 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:27 compute-0 nova_compute[188777]: 2026-02-19 20:40:27.949 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.563 188781 DEBUG nova.network.neutron [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Successfully updated port: 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 19 20:40:28 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:28.573 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.584 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.585 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquired lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.585 188781 DEBUG nova.network.neutron [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.685 188781 DEBUG nova.compute.manager [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-changed-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.685 188781 DEBUG nova.compute.manager [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Refreshing instance network info cache due to event network-changed-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.686 188781 DEBUG oslo_concurrency.lockutils [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.737 188781 DEBUG nova.network.neutron [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 19 20:40:28 compute-0 nova_compute[188777]: 2026-02-19 20:40:28.893 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:29 compute-0 podman[255467]: 2026-02-19 20:40:29.399130568 +0000 UTC m=+0.091484035 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., version=9.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Feb 19 20:40:29 compute-0 podman[255468]: 2026-02-19 20:40:29.417422717 +0000 UTC m=+0.095418457 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:40:29 compute-0 podman[204724]: time="2026-02-19T20:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:40:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:40:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5314 "" "Go-http-client/1.1"
Feb 19 20:40:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:30.457 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:30.458 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:30.458 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.585 188781 DEBUG nova.network.neutron [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updating instance_info_cache with network_info: [{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.609 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Releasing lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.609 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Instance network_info: |[{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.610 188781 DEBUG oslo_concurrency.lockutils [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquired lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.610 188781 DEBUG nova.network.neutron [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Refreshing network info cache for port 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.614 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Start _get_guest_xml network_info=[{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:36:39Z,direct_url=<?>,disk_format='qcow2',id=e98a7b34-d7ef-4dcd-b1f3-0a369d480f18,min_disk=0,min_ram=0,name='tempest-scenario-img--770255378',owner='3e54c3b3dadc42fca16da4cb7212a2db',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:36:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.622 188781 WARNING nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.630 188781 DEBUG nova.virt.libvirt.host [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.631 188781 DEBUG nova.virt.libvirt.host [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.640 188781 DEBUG nova.virt.libvirt.host [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.640 188781 DEBUG nova.virt.libvirt.host [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.641 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.641 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-19T20:34:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68c4e072-7c2b-48a1-8e07-0fd69e153270',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-19T20:36:39Z,direct_url=<?>,disk_format='qcow2',id=e98a7b34-d7ef-4dcd-b1f3-0a369d480f18,min_disk=0,min_ram=0,name='tempest-scenario-img--770255378',owner='3e54c3b3dadc42fca16da4cb7212a2db',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-19T20:36:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.642 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.642 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.643 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.643 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.643 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.644 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.644 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.644 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.645 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.645 188781 DEBUG nova.virt.hardware [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.648 188781 DEBUG nova.virt.libvirt.vif [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:40:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u',id=13,image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='08c5967c-a408-49e3-be73-425b7dd8ee8c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e54c3b3dadc42fca16da4cb7212a2db',ramdisk_id='',reservation_id='r-4fi1e04j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-653304289',owner_user_name='tempest-PrometheusGabbiTest-653304289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:40:22Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4495bf20aedd42ff97fdae62ef729522',uuid=c7d04a5a-1e2f-40c2-a686-18b23a5bddfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.649 188781 DEBUG nova.network.os_vif_util [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converting VIF {"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.650 188781 DEBUG nova.network.os_vif_util [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.651 188781 DEBUG nova.objects.instance [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lazy-loading 'pci_devices' on Instance uuid c7d04a5a-1e2f-40c2-a686-18b23a5bddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.666 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] End _get_guest_xml xml=<domain type="kvm">
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <uuid>c7d04a5a-1e2f-40c2-a686-18b23a5bddfa</uuid>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <name>instance-0000000d</name>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <memory>131072</memory>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <vcpu>1</vcpu>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <metadata>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:name>te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u</nova:name>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:creationTime>2026-02-19 20:40:30</nova:creationTime>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:flavor name="m1.nano">
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:memory>128</nova:memory>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:disk>1</nova:disk>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:swap>0</nova:swap>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:ephemeral>0</nova:ephemeral>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:vcpus>1</nova:vcpus>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       </nova:flavor>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:owner>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:user uuid="4495bf20aedd42ff97fdae62ef729522">tempest-PrometheusGabbiTest-653304289-project-member</nova:user>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:project uuid="3e54c3b3dadc42fca16da4cb7212a2db">tempest-PrometheusGabbiTest-653304289</nova:project>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       </nova:owner>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:root type="image" uuid="e98a7b34-d7ef-4dcd-b1f3-0a369d480f18"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <nova:ports>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         <nova:port uuid="6730c115-fc6d-4fab-9c7d-1f6f4bd9e878">
Feb 19 20:40:30 compute-0 nova_compute[188777]:           <nova:ip type="fixed" address="10.100.3.124" ipVersion="4"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:         </nova:port>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       </nova:ports>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </nova:instance>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </metadata>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <sysinfo type="smbios">
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <system>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <entry name="manufacturer">RDO</entry>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <entry name="product">OpenStack Compute</entry>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <entry name="serial">c7d04a5a-1e2f-40c2-a686-18b23a5bddfa</entry>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <entry name="uuid">c7d04a5a-1e2f-40c2-a686-18b23a5bddfa</entry>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <entry name="family">Virtual Machine</entry>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </system>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </sysinfo>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <os>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <boot dev="hd"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <smbios mode="sysinfo"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </os>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <features>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <acpi/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <apic/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <vmcoreinfo/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </features>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <clock offset="utc">
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <timer name="pit" tickpolicy="delay"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <timer name="hpet" present="no"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </clock>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <cpu mode="host-model" match="exact">
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <topology sockets="1" cores="1" threads="1"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </cpu>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   <devices>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <disk type="file" device="disk">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <target dev="vda" bus="virtio"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <disk type="file" device="cdrom">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <driver name="qemu" type="raw" cache="none"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <source file="/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.config"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <target dev="sda" bus="sata"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </disk>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <interface type="ethernet">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <mac address="fa:16:3e:b9:4e:00"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <driver name="vhost" rx_queue_size="512"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <mtu size="1442"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <target dev="tap6730c115-fc"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </interface>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <serial type="pty">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <log file="/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/console.log" append="off"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </serial>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <video>
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <model type="virtio"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </video>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <input type="tablet" bus="usb"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <rng model="virtio">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <backend model="random">/dev/urandom</backend>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </rng>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="pci" model="pcie-root-port"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <controller type="usb" index="0"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     <memballoon model="virtio">
Feb 19 20:40:30 compute-0 nova_compute[188777]:       <stats period="10"/>
Feb 19 20:40:30 compute-0 nova_compute[188777]:     </memballoon>
Feb 19 20:40:30 compute-0 nova_compute[188777]:   </devices>
Feb 19 20:40:30 compute-0 nova_compute[188777]: </domain>
Feb 19 20:40:30 compute-0 nova_compute[188777]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.668 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Preparing to wait for external event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.668 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.668 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.669 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.669 188781 DEBUG nova.virt.libvirt.vif [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-19T20:40:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u',id=13,image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='08c5967c-a408-49e3-be73-425b7dd8ee8c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e54c3b3dadc42fca16da4cb7212a2db',ramdisk_id='',reservation_id='r-4fi1e04j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-653304289',owner_user_name='tempest-PrometheusGabbiTest-653304289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-19T20:40:22Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4495bf20aedd42ff97fdae62ef729522',uuid=c7d04a5a-1e2f-40c2-a686-18b23a5bddfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.670 188781 DEBUG nova.network.os_vif_util [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converting VIF {"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.670 188781 DEBUG nova.network.os_vif_util [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.671 188781 DEBUG os_vif [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.671 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.672 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.672 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.675 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.676 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6730c115-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.677 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6730c115-fc, col_values=(('external_ids', {'iface-id': '6730c115-fc6d-4fab-9c7d-1f6f4bd9e878', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:4e:00', 'vm-uuid': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
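The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp commands batched into OVSDB transactions. A hedged sketch of issuing the equivalent transaction directly (socket path is the usual local default, names copied from the log; requires access to ovsdb-server):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same commands as the txn n=1 entries above, committed atomically.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap6730c115-fc', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6730c115-fc',
            ('external_ids',
             {'iface-id': '6730c115-fc6d-4fab-9c7d-1f6f4bd9e878',
              'attached-mac': 'fa:16:3e:b9:4e:00'})))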
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.678 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:30 compute-0 NetworkManager[57033]: <info>  [1771533630.6796] manager: (tap6730c115-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.680 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.685 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.686 188781 INFO os_vif [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc')
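os_vif.plug() (os_vif/__init__.py:76 above) takes the converted VIF object plus a small InstanceInfo. A rough reconstruction of that call with values from the log; the field names follow the VIFOpenVSwitch repr above, and running it assumes the os-vif ovs plugin is installed and you have root privileges:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the ovs/linux_bridge/... plugins

    net = network.Network(id='03b0387c-cb4d-416d-b212-4d980b66cbe2',
                          bridge='br-int')
    my_vif = vif.VIFOpenVSwitch(
        id='6730c115-fc6d-4fab-9c7d-1f6f4bd9e878',
        address='fa:16:3e:b9:4e:00',
        vif_name='tap6730c115-fc',
        bridge_name='br-int',
        plugin='ovs',
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='6730c115-fc6d-4fab-9c7d-1f6f4bd9e878'))
    info = instance_info.InstanceInfo(
        uuid='c7d04a5a-1e2f-40c2-a686-18b23a5bddfa',
        name='instance-0000000d')

    os_vif.plug(my_vif, info)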
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.732 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.733 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.733 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] No VIF found with MAC fa:16:3e:b9:4e:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 19 20:40:30 compute-0 nova_compute[188777]: 2026-02-19 20:40:30.734 188781 INFO nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Using config drive
Feb 19 20:40:31 compute-0 openstack_network_exporter[207898]: ERROR   20:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Feb 19 20:40:31 compute-0 openstack_network_exporter[207898]: ERROR   20:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.565 188781 INFO nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Creating config drive at /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.config
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.570 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg4grtlhy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.691 188781 DEBUG oslo_concurrency.processutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg4grtlhy" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
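The config drive is a plain ISO9660/Joliet image labelled config-2; the mkisofs invocation logged above can be replayed through processutils (the source directory below is a placeholder for the metadata tree nova stages first):

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/metadata_tree')  # placeholder dir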
Feb 19 20:40:31 compute-0 kernel: tap6730c115-fc: entered promiscuous mode
Feb 19 20:40:31 compute-0 ovn_controller[98843]: 2026-02-19T20:40:31Z|00144|binding|INFO|Claiming lport 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 for this chassis.
Feb 19 20:40:31 compute-0 NetworkManager[57033]: <info>  [1771533631.7573] manager: (tap6730c115-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.756 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:31 compute-0 ovn_controller[98843]: 2026-02-19T20:40:31Z|00145|binding|INFO|6730c115-fc6d-4fab-9c7d-1f6f4bd9e878: Claiming fa:16:3e:b9:4e:00 10.100.3.124
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.766 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.771 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:31 compute-0 ovn_controller[98843]: 2026-02-19T20:40:31Z|00146|binding|INFO|Setting lport 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 ovn-installed in OVS
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.769 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:4e:00 10.100.3.124'], port_security=['fa:16:3e:b9:4e:00 10.100.3.124'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.124/16', 'neutron:device_id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c84042e2-5094-46cb-8818-ed6fb8d69afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e658df7-7d87-44f0-8690-f7f2e1d7b0ae, chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.770 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 in datapath 03b0387c-cb4d-416d-b212-4d980b66cbe2 bound to our chassis
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.772 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03b0387c-cb4d-416d-b212-4d980b66cbe2
Feb 19 20:40:31 compute-0 ovn_controller[98843]: 2026-02-19T20:40:31Z|00147|binding|INFO|Setting lport 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 up in Southbound
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.778 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.793 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[54d11886-3b6b-44f1-8b61-2eeea6c8bfc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:40:31 compute-0 systemd-udevd[255543]: Network interface NamePolicy= disabled on kernel command line.
Feb 19 20:40:31 compute-0 systemd-machined[158158]: New machine qemu-14-instance-0000000d.
Feb 19 20:40:31 compute-0 NetworkManager[57033]: <info>  [1771533631.8134] device (tap6730c115-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 19 20:40:31 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Feb 19 20:40:31 compute-0 NetworkManager[57033]: <info>  [1771533631.8179] device (tap6730c115-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.819 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[eead60a9-0f47-4b53-964c-adeda55f0d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.823 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[6cac6e29-ee3a-4793-959b-e5362ef40b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.846 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4b2d9d-9a68-4feb-a2e0-a6dd24f62914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.859 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[2f12099c-ce99-496d-84b3-39d255fea007]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03b0387c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:70:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493020, 'reachable_time': 43412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255557, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.870 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[568f0f1d-303d-40e1-a1df-20961dd79065]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap03b0387c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493027, 'tstamp': 493027}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255563, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03b0387c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493030, 'tstamp': 493030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255563, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
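The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta- namespace: one RTM_NEWLINK for the veth end tap03b0387c-c1, then two RTM_NEWADDR records for 10.100.0.2/16 and the metadata address 169.254.169.254/32. The same data can be read directly with pyroute2 (needs root and an existing namespace; names copied from the log):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2') as ns:
        idx = ns.link_lookup(ifname='tap03b0387c-c1')[0]
        for msg in ns.get_addr(index=idx):
            attrs = dict(msg['attrs'])
            print(attrs['IFA_ADDRESS'], msg['prefixlen'])
    # expected output: 10.100.0.2 16, then 169.254.169.254 32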
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.872 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03b0387c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.873 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:31 compute-0 nova_compute[188777]: 2026-02-19 20:40:31.874 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.874 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03b0387c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.875 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.875 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03b0387c-c0, col_values=(('external_ids', {'iface-id': 'ac510fcf-4783-4f81-b107-f5dac80c5fad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:40:31 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:40:31.875 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:40:31 compute-0 podman[255523]: 2026-02-19 20:40:31.885237889 +0000 UTC m=+0.107864553 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 19 20:40:32 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 19 20:40:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.113 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533632.1136014, c7d04a5a-1e2f-40c2-a686-18b23a5bddfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.114 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] VM Started (Lifecycle Event)
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.132 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.136 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533632.1136937, c7d04a5a-1e2f-40c2-a686-18b23a5bddfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.136 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] VM Paused (Lifecycle Event)
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.154 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.158 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.176 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] During sync_power_state the instance has a pending task (spawning). Skip.
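The numeric states in the sync messages ("current DB power_state: 0, VM power_state: 3") are nova's hypervisor power-state constants; for reference, the values from nova/compute/power_state.py:

    NOSTATE = 0x00    # DB value before the first sync
    RUNNING = 0x01    # reported after "Resumed" below
    PAUSED = 0x03     # reported here: the guest starts paused while
                      # nova waits for network-vif-plugged
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07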
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.599 188781 DEBUG nova.network.neutron [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updated VIF entry in instance network info cache for port 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.600 188781 DEBUG nova.network.neutron [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updating instance_info_cache with network_info: [{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.616 188781 DEBUG oslo_concurrency.lockutils [req-1ad9aa76-4506-4e4b-b3cb-874b2b7183cb req-877471be-5b41-4b18-9417-359c56003d17 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Releasing lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.668 188781 DEBUG nova.compute.manager [req-1a64cc22-b078-4c3c-8577-b43394a55f22 req-d0f10069-66d0-46cb-9d98-a922f57469e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.668 188781 DEBUG oslo_concurrency.lockutils [req-1a64cc22-b078-4c3c-8577-b43394a55f22 req-d0f10069-66d0-46cb-9d98-a922f57469e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.670 188781 DEBUG oslo_concurrency.lockutils [req-1a64cc22-b078-4c3c-8577-b43394a55f22 req-d0f10069-66d0-46cb-9d98-a922f57469e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.670 188781 DEBUG oslo_concurrency.lockutils [req-1a64cc22-b078-4c3c-8577-b43394a55f22 req-d0f10069-66d0-46cb-9d98-a922f57469e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.670 188781 DEBUG nova.compute.manager [req-1a64cc22-b078-4c3c-8577-b43394a55f22 req-d0f10069-66d0-46cb-9d98-a922f57469e8 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Processing event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.671 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.674 188781 DEBUG nova.virt.driver [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] Emitting event <LifecycleEvent: 1771533632.6745377, c7d04a5a-1e2f-40c2-a686-18b23a5bddfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.675 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] VM Resumed (Lifecycle Event)
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.677 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.682 188781 INFO nova.virt.libvirt.driver [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Instance spawned successfully.
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.682 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.694 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.705 188781 DEBUG nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.710 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.711 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.711 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.712 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.712 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.713 188781 DEBUG nova.virt.libvirt.driver [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.755 188781 INFO nova.compute.manager [None req-530bcd84-7851-4dbe-b8db-eccd877c052d - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.805 188781 INFO nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Took 10.06 seconds to spawn the instance on the hypervisor.
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.805 188781 DEBUG nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.879 188781 INFO nova.compute.manager [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Took 10.60 seconds to build instance.
Feb 19 20:40:32 compute-0 nova_compute[188777]: 2026-02-19 20:40:32.903 188781 DEBUG oslo_concurrency.lockutils [None req-e80cb43e-54df-445b-a125-24843cdd826b 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:33 compute-0 nova_compute[188777]: 2026-02-19 20:40:33.896 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.644 188781 DEBUG nova.compute.manager [req-3511b4b7-f052-46ac-8ce7-e5e0b5f8c73f req-cfc5321f-aaae-4c02-9408-1b22a3e2444a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.644 188781 DEBUG oslo_concurrency.lockutils [req-3511b4b7-f052-46ac-8ce7-e5e0b5f8c73f req-cfc5321f-aaae-4c02-9408-1b22a3e2444a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.645 188781 DEBUG oslo_concurrency.lockutils [req-3511b4b7-f052-46ac-8ce7-e5e0b5f8c73f req-cfc5321f-aaae-4c02-9408-1b22a3e2444a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.647 188781 DEBUG oslo_concurrency.lockutils [req-3511b4b7-f052-46ac-8ce7-e5e0b5f8c73f req-cfc5321f-aaae-4c02-9408-1b22a3e2444a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.648 188781 DEBUG nova.compute.manager [req-3511b4b7-f052-46ac-8ce7-e5e0b5f8c73f req-cfc5321f-aaae-4c02-9408-1b22a3e2444a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] No waiting events found dispatching network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.649 188781 WARNING nova.compute.manager [req-3511b4b7-f052-46ac-8ce7-e5e0b5f8c73f req-cfc5321f-aaae-4c02-9408-1b22a3e2444a 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received unexpected event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 for instance with vm_state active and task_state None.
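This WARNING is benign: the first network-vif-plugged at 20:40:32 was consumed by the spawn-time waiter, and this second delivery arrives after the instance went active, so no waiter is registered. The prepare/wait/pop mechanics reduce to a keyed-event table; a toy sketch (plain threading instead of nova's eventlet, not nova's actual code):

    import threading

    _events = {}

    def prepare(name):
        _events[name] = threading.Event()
        return _events[name]

    def pop(name):
        waiter = _events.pop(name, None)
        if waiter is None:
            print(f'Received unexpected event {name}')  # the WARNING above
        else:
            waiter.set()

    evt = prepare('network-vif-plugged-6730c115')
    pop('network-vif-plugged-6730c115')  # first delivery wakes the waiter
    evt.wait(timeout=300)
    pop('network-vif-plugged-6730c115')  # late duplicate: no waiter left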
Feb 19 20:40:35 compute-0 nova_compute[188777]: 2026-02-19 20:40:35.679 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:37 compute-0 podman[255592]: 2026-02-19 20:40:37.402591485 +0000 UTC m=+0.087463879 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi)
Feb 19 20:40:37 compute-0 podman[255591]: 2026-02-19 20:40:37.402730689 +0000 UTC m=+0.089054169 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=kepler, name=ubi9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 19 20:40:38 compute-0 nova_compute[188777]: 2026-02-19 20:40:38.898 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:40 compute-0 nova_compute[188777]: 2026-02-19 20:40:40.683 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:41 compute-0 podman[255626]: 2026-02-19 20:40:41.417002117 +0000 UTC m=+0.103038923 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:40:43 compute-0 podman[255650]: 2026-02-19 20:40:43.418634371 +0000 UTC m=+0.107908485 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:40:43 compute-0 nova_compute[188777]: 2026-02-19 20:40:43.899 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:44 compute-0 podman[255667]: 2026-02-19 20:40:44.786073163 +0000 UTC m=+0.107629867 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 19 20:40:45 compute-0 nova_compute[188777]: 2026-02-19 20:40:45.688 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:48 compute-0 nova_compute[188777]: 2026-02-19 20:40:48.901 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:50 compute-0 nova_compute[188777]: 2026-02-19 20:40:50.692 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:51 compute-0 nova_compute[188777]: 2026-02-19 20:40:51.281 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:51 compute-0 nova_compute[188777]: 2026-02-19 20:40:51.282 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
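Periodic tasks such as _reclaim_queued_deletes are decorated methods collected by oslo.service; a minimal runnable sketch (option registration included so the CONF guard works standalone; the spacing value is illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class ManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # the "skipping..." branch logged above

    ManagerSketch(CONF).run_periodic_tasks(context=None)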
Feb 19 20:40:53 compute-0 nova_compute[188777]: 2026-02-19 20:40:53.921 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:55 compute-0 nova_compute[188777]: 2026-02-19 20:40:55.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:55 compute-0 nova_compute[188777]: 2026-02-19 20:40:55.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:55 compute-0 nova_compute[188777]: 2026-02-19 20:40:55.696 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:58 compute-0 nova_compute[188777]: 2026-02-19 20:40:58.929 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.288 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.290 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.392 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.446 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.448 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.512 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.522 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.578 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.579 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.630 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.637 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.704 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.706 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 podman[204724]: time="2026-02-19T20:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:40:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:40:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5314 "" "Go-http-client/1.1"
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.762 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.768 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.824 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.825 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:40:59 compute-0 nova_compute[188777]: 2026-02-19 20:40:59.884 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.262 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.265 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4666MB free_disk=72.08324813842773GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.267 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.267 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:41:00 compute-0 podman[255721]: 2026-02-19 20:41:00.384765617 +0000 UTC m=+0.063667239 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:41:00 compute-0 podman[255720]: 2026-02-19 20:41:00.39964861 +0000 UTC m=+0.079905545 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, name=ubi9/ubi-minimal, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.496 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.498 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.499 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.500 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.501 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.501 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.625 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.664 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.699 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.700 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:41:00 compute-0 nova_compute[188777]: 2026-02-19 20:41:00.700 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:01 compute-0 openstack_network_exporter[207898]: ERROR   20:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:41:01 compute-0 openstack_network_exporter[207898]: ERROR   20:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:41:01 compute-0 ovn_controller[98843]: 2026-02-19T20:41:01Z|00148|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb 19 20:41:02 compute-0 podman[255761]: 2026-02-19 20:41:02.370017651 +0000 UTC m=+0.053123802 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:41:02 compute-0 nova_compute[188777]: 2026-02-19 20:41:02.695 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:02 compute-0 nova_compute[188777]: 2026-02-19 20:41:02.699 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:02 compute-0 nova_compute[188777]: 2026-02-19 20:41:02.700 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:41:02 compute-0 nova_compute[188777]: 2026-02-19 20:41:02.700 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:41:03 compute-0 nova_compute[188777]: 2026-02-19 20:41:03.550 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:41:03 compute-0 nova_compute[188777]: 2026-02-19 20:41:03.551 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:41:03 compute-0 nova_compute[188777]: 2026-02-19 20:41:03.552 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:41:03 compute-0 nova_compute[188777]: 2026-02-19 20:41:03.552 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:41:03 compute-0 nova_compute[188777]: 2026-02-19 20:41:03.932 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:05 compute-0 ovn_controller[98843]: 2026-02-19T20:41:05Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:4e:00 10.100.3.124
Feb 19 20:41:05 compute-0 ovn_controller[98843]: 2026-02-19T20:41:05Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:4e:00 10.100.3.124
Feb 19 20:41:05 compute-0 nova_compute[188777]: 2026-02-19 20:41:05.579 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:41:05 compute-0 nova_compute[188777]: 2026-02-19 20:41:05.592 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:41:05 compute-0 nova_compute[188777]: 2026-02-19 20:41:05.593 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:41:05 compute-0 nova_compute[188777]: 2026-02-19 20:41:05.594 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:05 compute-0 nova_compute[188777]: 2026-02-19 20:41:05.594 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:05 compute-0 nova_compute[188777]: 2026-02-19 20:41:05.703 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:06 compute-0 nova_compute[188777]: 2026-02-19 20:41:06.154 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:06 compute-0 nova_compute[188777]: 2026-02-19 20:41:06.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:08 compute-0 podman[255792]: 2026-02-19 20:41:08.387119921 +0000 UTC m=+0.064610640 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=kepler, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Feb 19 20:41:08 compute-0 podman[255793]: 2026-02-19 20:41:08.391330421 +0000 UTC m=+0.063566366 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:41:08 compute-0 nova_compute[188777]: 2026-02-19 20:41:08.935 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:10 compute-0 nova_compute[188777]: 2026-02-19 20:41:10.708 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:12 compute-0 podman[255828]: 2026-02-19 20:41:12.362085017 +0000 UTC m=+0.048206029 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:41:13 compute-0 nova_compute[188777]: 2026-02-19 20:41:13.938 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:14 compute-0 podman[255853]: 2026-02-19 20:41:14.368149618 +0000 UTC m=+0.057501488 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.153 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.153 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.153 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f54ed550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.158 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 19 20:41:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:15.159 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}eb82bb0a04ff18fe5ce8169193b61d179e0542ea510a5cad5008c259e31f58a8" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 19 20:41:15 compute-0 podman[255875]: 2026-02-19 20:41:15.395966254 +0000 UTC m=+0.086354985 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:41:15 compute-0 nova_compute[188777]: 2026-02-19 20:41:15.710 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.532 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Thu, 19 Feb 2026 20:41:15 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3e6ccb54-3ea8-4f73-8dd3-5084548992fc x-openstack-request-id: req-3e6ccb54-3ea8-4f73-8dd3-5084548992fc _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.532 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa", "name": "te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u", "status": "ACTIVE", "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "user_id": "4495bf20aedd42ff97fdae62ef729522", "metadata": {"metering.server_group": "08c5967c-a408-49e3-be73-425b7dd8ee8c"}, "hostId": "22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609", "image": {"id": "e98a7b34-d7ef-4dcd-b1f3-0a369d480f18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e98a7b34-d7ef-4dcd-b1f3-0a369d480f18"}]}, "flavor": {"id": "68c4e072-7c2b-48a1-8e07-0fd69e153270", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/68c4e072-7c2b-48a1-8e07-0fd69e153270"}]}, "created": "2026-02-19T20:40:21Z", "updated": "2026-02-19T20:40:32Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.124", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b9:4e:00"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-19T20:40:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.532 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa used request id req-3e6ccb54-3ea8-4f73-8dd3-5084548992fc request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.534 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'name': 'te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.538 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.542 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.546 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.546 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.546 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.546 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.547 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:41:16.546983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.551 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c7d04a5a-1e2f-40c2-a686-18b23a5bddfa / tap6730c115-fc inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.552 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.557 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.561 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.566 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.566 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.567 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.567 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.567 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.567 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.568 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-19T20:41:16.567408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.568 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.569 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>]
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.569 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.570 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.570 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.570 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.571 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:41:16.570995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.571 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.572 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.572 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.573 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.573 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.574 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.574 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.574 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.575 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.575 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.576 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:41:16.575485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.576 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.577 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.577 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.578 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.578 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.579 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.579 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.579 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.580 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:41:16.580004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.580 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.581 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.581 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.582 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.582 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.583 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.583 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.583 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.584 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.584 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:41:16.584310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.602 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.625 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.642 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.665 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.666 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.666 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.667 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.667 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.668 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.668 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:41:16.668369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.669 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.669 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.670 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.670 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.671 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.671 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.671 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.672 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.672 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.672 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:41:16.672777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.689 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.690 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.703 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.703 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.716 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.717 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.730 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.730 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.731 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.731 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.732 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.732 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.732 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.733 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:41:16.733277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.760 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.761 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.819 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.821 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.861 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.861 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.894 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.894 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.895 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/cpu volume: 42360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:41:16.896187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 35570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.896 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 35260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.897 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 255040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.897 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.897 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.897 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.897 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.897 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.898 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-19T20:41:16.897724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.899 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.899 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>]
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.900 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.901 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.901 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.901 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:41:16.901691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.902 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 816188800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.902 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 168782704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.903 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.903 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.904 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.904 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.905 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 873501798 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.905 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 81108509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.906 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.906 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.906 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.907 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.907 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.907 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:41:16.907784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.908 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.909 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.909 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.909 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.910 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.910 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.911 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.911 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.911 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.912 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:41:16.912005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.912 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.913 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.913 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.914 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.914 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.915 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.915 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.915 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.916 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.916 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:41:16.916377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.917 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.917 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.917 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.918 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.918 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.919 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.919 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.920 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.920 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.921 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.921 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.921 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.922 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.922 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:41:16.922460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.923 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.924 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.924 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.924 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.925 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.925 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.926 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.926 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.926 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.927 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:41:16.927015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.928 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.928 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.929 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.929 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.930 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.930 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.930 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.931 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.931 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.932 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.932 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.932 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.933 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.933 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:41:16.933582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.934 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 72798208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.934 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.935 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.935 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.936 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.936 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.936 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.937 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.937 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.938 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.938 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.938 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.939 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.939 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:41:16.939598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.940 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.940 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.941 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.941 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.942 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.942 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.943 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.943 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.944 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.944 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.945 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.945 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.945 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.946 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:41:16.946018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.946 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 3394314859 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.947 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.947 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.948 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.948 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.949 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.949 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 3607776581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.949 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.950 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.950 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.951 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.951 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.951 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.952 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:41:16.952240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.952 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.953 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.953 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.954 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.954 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.955 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.955 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.955 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.956 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.956 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:41:16.956583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.957 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.957 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.958 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.958 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.959 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.959 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.959 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.960 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.961 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.961 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.961 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.962 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.962 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.962 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:41:16.962813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.964 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.964 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.965 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.965 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.965 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.966 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:41:16.966024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.967 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.967 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.968 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.968 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.968 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.969 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:41:16.969226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.970 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/memory.usage volume: 47.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.970 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.971 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.971 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: 43.578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.972 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.972 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.972 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.973 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.973 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.973 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:41:16.973779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.974 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.975 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.975 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.976 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.976 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.977 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.977 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.977 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.977 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.977 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.977 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.978 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:16 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:41:16.979 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:41:18 compute-0 nova_compute[188777]: 2026-02-19 20:41:18.941 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:20 compute-0 nova_compute[188777]: 2026-02-19 20:41:20.713 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:23 compute-0 nova_compute[188777]: 2026-02-19 20:41:23.943 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:25 compute-0 nova_compute[188777]: 2026-02-19 20:41:25.717 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:28 compute-0 nova_compute[188777]: 2026-02-19 20:41:28.944 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:29 compute-0 podman[204724]: time="2026-02-19T20:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:41:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:41:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5310 "" "Go-http-client/1.1"
Feb 19 20:41:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:41:30.459 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:41:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:41:30.459 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:41:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:41:30.460 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:41:30 compute-0 nova_compute[188777]: 2026-02-19 20:41:30.720 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:31 compute-0 podman[255903]: 2026-02-19 20:41:31.393348892 +0000 UTC m=+0.072050380 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:41:31 compute-0 podman[255902]: 2026-02-19 20:41:31.397913433 +0000 UTC m=+0.077803198 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.openshift.expose-services=, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 19 20:41:31 compute-0 openstack_network_exporter[207898]: ERROR   20:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:41:31 compute-0 openstack_network_exporter[207898]: ERROR   20:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:41:33 compute-0 podman[255943]: 2026-02-19 20:41:33.369663887 +0000 UTC m=+0.051564034 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 19 20:41:33 compute-0 nova_compute[188777]: 2026-02-19 20:41:33.948 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:34 compute-0 sshd-session[255949]: Invalid user mira from 160.187.147.124 port 48194
Feb 19 20:41:34 compute-0 sshd-session[255949]: Received disconnect from 160.187.147.124 port 48194:11: Bye Bye [preauth]
Feb 19 20:41:34 compute-0 sshd-session[255949]: Disconnected from invalid user mira 160.187.147.124 port 48194 [preauth]
Feb 19 20:41:35 compute-0 nova_compute[188777]: 2026-02-19 20:41:35.724 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:38 compute-0 nova_compute[188777]: 2026-02-19 20:41:38.949 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:39 compute-0 podman[255964]: 2026-02-19 20:41:39.381358478 +0000 UTC m=+0.062484704 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, config_id=kepler, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4)
Feb 19 20:41:39 compute-0 podman[255965]: 2026-02-19 20:41:39.405469217 +0000 UTC m=+0.085340484 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi)
Feb 19 20:41:40 compute-0 nova_compute[188777]: 2026-02-19 20:41:40.729 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:43 compute-0 podman[256002]: 2026-02-19 20:41:43.413204582 +0000 UTC m=+0.086080587 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:41:43 compute-0 nova_compute[188777]: 2026-02-19 20:41:43.952 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:44 compute-0 podman[256026]: 2026-02-19 20:41:44.754979916 +0000 UTC m=+0.088626986 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 19 20:41:45 compute-0 nova_compute[188777]: 2026-02-19 20:41:45.735 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:46 compute-0 podman[256044]: 2026-02-19 20:41:46.414115384 +0000 UTC m=+0.100190965 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:41:48 compute-0 nova_compute[188777]: 2026-02-19 20:41:48.954 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:50 compute-0 nova_compute[188777]: 2026-02-19 20:41:50.739 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:53 compute-0 nova_compute[188777]: 2026-02-19 20:41:53.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:53 compute-0 nova_compute[188777]: 2026-02-19 20:41:53.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:41:53 compute-0 nova_compute[188777]: 2026-02-19 20:41:53.955 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:55 compute-0 nova_compute[188777]: 2026-02-19 20:41:55.742 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:56 compute-0 nova_compute[188777]: 2026-02-19 20:41:56.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:57 compute-0 nova_compute[188777]: 2026-02-19 20:41:57.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:41:58 compute-0 sshd-session[256072]: Invalid user dixi from 103.250.11.249 port 38986
Feb 19 20:41:58 compute-0 sshd-session[256072]: Received disconnect from 103.250.11.249 port 38986:11: Bye Bye [preauth]
Feb 19 20:41:58 compute-0 sshd-session[256072]: Disconnected from invalid user dixi 103.250.11.249 port 38986 [preauth]
Feb 19 20:41:58 compute-0 nova_compute[188777]: 2026-02-19 20:41:58.958 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:41:59 compute-0 podman[204724]: time="2026-02-19T20:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:41:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:41:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5311 "" "Go-http-client/1.1"
Feb 19 20:42:00 compute-0 nova_compute[188777]: 2026-02-19 20:42:00.746 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.295 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.295 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.296 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.297 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.389 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 openstack_network_exporter[207898]: ERROR   20:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:42:01 compute-0 openstack_network_exporter[207898]: ERROR   20:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.449 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.450 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.502 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.507 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.559 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.560 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.628 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.635 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.686 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.688 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.755 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.762 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.821 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.822 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:42:01 compute-0 nova_compute[188777]: 2026-02-19 20:42:01.876 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.228 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.230 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4640MB free_disk=72.05556106567383GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.230 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.230 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.310 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.311 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.311 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.312 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.312 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.312 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:42:02 compute-0 podman[256099]: 2026-02-19 20:42:02.396337941 +0000 UTC m=+0.078352516 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.7, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Feb 19 20:42:02 compute-0 podman[256100]: 2026-02-19 20:42:02.401990454 +0000 UTC m=+0.080012837 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.412 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.428 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.429 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:42:02 compute-0 nova_compute[188777]: 2026-02-19 20:42:02.430 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:42:03 compute-0 nova_compute[188777]: 2026-02-19 20:42:03.430 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:03 compute-0 nova_compute[188777]: 2026-02-19 20:42:03.430 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:03 compute-0 nova_compute[188777]: 2026-02-19 20:42:03.431 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:42:03 compute-0 nova_compute[188777]: 2026-02-19 20:42:03.961 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:04 compute-0 podman[256142]: 2026-02-19 20:42:04.394511798 +0000 UTC m=+0.078988255 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 19 20:42:04 compute-0 nova_compute[188777]: 2026-02-19 20:42:04.603 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:42:04 compute-0 nova_compute[188777]: 2026-02-19 20:42:04.604 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:42:04 compute-0 nova_compute[188777]: 2026-02-19 20:42:04.604 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:42:05 compute-0 nova_compute[188777]: 2026-02-19 20:42:05.751 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:07 compute-0 nova_compute[188777]: 2026-02-19 20:42:07.749 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:42:07 compute-0 nova_compute[188777]: 2026-02-19 20:42:07.767 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:42:07 compute-0 nova_compute[188777]: 2026-02-19 20:42:07.768 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:42:07 compute-0 nova_compute[188777]: 2026-02-19 20:42:07.768 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:07 compute-0 nova_compute[188777]: 2026-02-19 20:42:07.769 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:07 compute-0 nova_compute[188777]: 2026-02-19 20:42:07.769 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:08 compute-0 nova_compute[188777]: 2026-02-19 20:42:08.964 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:10 compute-0 podman[256169]: 2026-02-19 20:42:10.376150975 +0000 UTC m=+0.065360206 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Feb 19 20:42:10 compute-0 podman[256170]: 2026-02-19 20:42:10.377806387 +0000 UTC m=+0.064394888 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 19 20:42:10 compute-0 nova_compute[188777]: 2026-02-19 20:42:10.753 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:13 compute-0 nova_compute[188777]: 2026-02-19 20:42:13.965 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:14 compute-0 podman[256208]: 2026-02-19 20:42:14.410099485 +0000 UTC m=+0.096554634 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:42:15 compute-0 podman[256232]: 2026-02-19 20:42:15.456636136 +0000 UTC m=+0.136850893 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 19 20:42:15 compute-0 nova_compute[188777]: 2026-02-19 20:42:15.757 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:17 compute-0 podman[256250]: 2026-02-19 20:42:17.405727766 +0000 UTC m=+0.093188430 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:42:18 compute-0 nova_compute[188777]: 2026-02-19 20:42:18.969 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:20 compute-0 nova_compute[188777]: 2026-02-19 20:42:20.762 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:23 compute-0 nova_compute[188777]: 2026-02-19 20:42:23.972 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:25 compute-0 nova_compute[188777]: 2026-02-19 20:42:25.766 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:28 compute-0 nova_compute[188777]: 2026-02-19 20:42:28.973 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:29 compute-0 podman[204724]: time="2026-02-19T20:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:42:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:42:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5316 "" "Go-http-client/1.1"
Feb 19 20:42:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:42:30.460 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:42:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:42:30.461 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:42:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:42:30.461 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:42:30 compute-0 nova_compute[188777]: 2026-02-19 20:42:30.769 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:31 compute-0 openstack_network_exporter[207898]: ERROR   20:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:42:31 compute-0 openstack_network_exporter[207898]: ERROR   20:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:42:32 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 19 20:42:33 compute-0 podman[256280]: 2026-02-19 20:42:33.363456428 +0000 UTC m=+0.053721229 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:42:33 compute-0 podman[256279]: 2026-02-19 20:42:33.376149868 +0000 UTC m=+0.066340198 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1770267347, architecture=x86_64, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Feb 19 20:42:33 compute-0 nova_compute[188777]: 2026-02-19 20:42:33.976 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:35 compute-0 podman[256321]: 2026-02-19 20:42:35.391489943 +0000 UTC m=+0.072111244 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:42:35 compute-0 nova_compute[188777]: 2026-02-19 20:42:35.773 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:37 compute-0 sshd-session[256341]: Invalid user claude from 103.179.56.24 port 58110
Feb 19 20:42:37 compute-0 sshd-session[256341]: Received disconnect from 103.179.56.24 port 58110:11: Bye Bye [preauth]
Feb 19 20:42:37 compute-0 sshd-session[256341]: Disconnected from invalid user claude 103.179.56.24 port 58110 [preauth]
Feb 19 20:42:38 compute-0 nova_compute[188777]: 2026-02-19 20:42:38.979 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:40 compute-0 nova_compute[188777]: 2026-02-19 20:42:40.776 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:41 compute-0 podman[256343]: 2026-02-19 20:42:41.383336354 +0000 UTC m=+0.072117635 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, release=1214.1726694543, maintainer=Red Hat, Inc., io.buildah.version=1.29.0)
Feb 19 20:42:41 compute-0 podman[256344]: 2026-02-19 20:42:41.383682875 +0000 UTC m=+0.071212407 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 19 20:42:43 compute-0 nova_compute[188777]: 2026-02-19 20:42:43.981 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:44 compute-0 podman[256382]: 2026-02-19 20:42:44.723392227 +0000 UTC m=+0.062250302 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:42:45 compute-0 nova_compute[188777]: 2026-02-19 20:42:45.779 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:46 compute-0 podman[256405]: 2026-02-19 20:42:46.427631283 +0000 UTC m=+0.099003230 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Feb 19 20:42:48 compute-0 podman[256426]: 2026-02-19 20:42:48.41600486 +0000 UTC m=+0.105012694 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:42:48 compute-0 nova_compute[188777]: 2026-02-19 20:42:48.984 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:50 compute-0 nova_compute[188777]: 2026-02-19 20:42:50.783 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:51 compute-0 sshd-session[256453]: Received disconnect from 154.12.80.151 port 58316:11: Bye Bye [preauth]
Feb 19 20:42:51 compute-0 sshd-session[256453]: Disconnected from authenticating user root 154.12.80.151 port 58316 [preauth]
Feb 19 20:42:53 compute-0 nova_compute[188777]: 2026-02-19 20:42:53.986 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:54 compute-0 nova_compute[188777]: 2026-02-19 20:42:54.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:54 compute-0 nova_compute[188777]: 2026-02-19 20:42:54.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:42:55 compute-0 nova_compute[188777]: 2026-02-19 20:42:55.785 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:42:56 compute-0 nova_compute[188777]: 2026-02-19 20:42:56.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:57 compute-0 nova_compute[188777]: 2026-02-19 20:42:57.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:42:58 compute-0 nova_compute[188777]: 2026-02-19 20:42:58.988 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:00 compute-0 podman[204724]: time="2026-02-19T20:43:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:43:00 compute-0 podman[204724]: @ - - [19/Feb/2026:20:43:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:43:00 compute-0 podman[204724]: @ - - [19/Feb/2026:20:43:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5315 "" "Go-http-client/1.1"
Feb 19 20:43:00 compute-0 nova_compute[188777]: 2026-02-19 20:43:00.790 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:01 compute-0 anacron[148260]: Job `cron.weekly' started
Feb 19 20:43:01 compute-0 anacron[148260]: Job `cron.weekly' terminated
Feb 19 20:43:01 compute-0 openstack_network_exporter[207898]: ERROR   20:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:43:01 compute-0 openstack_network_exporter[207898]: ERROR   20:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:43:03 compute-0 nova_compute[188777]: 2026-02-19 20:43:03.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:03 compute-0 nova_compute[188777]: 2026-02-19 20:43:03.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:43:03 compute-0 nova_compute[188777]: 2026-02-19 20:43:03.990 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:04 compute-0 podman[256458]: 2026-02-19 20:43:04.372196265 +0000 UTC m=+0.063442978 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Feb 19 20:43:04 compute-0 podman[256459]: 2026-02-19 20:43:04.406614151 +0000 UTC m=+0.091318304 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:43:04 compute-0 nova_compute[188777]: 2026-02-19 20:43:04.765 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:43:04 compute-0 nova_compute[188777]: 2026-02-19 20:43:04.766 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:43:04 compute-0 nova_compute[188777]: 2026-02-19 20:43:04.766 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:43:05 compute-0 nova_compute[188777]: 2026-02-19 20:43:05.793 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:06 compute-0 podman[256503]: 2026-02-19 20:43:06.394640558 +0000 UTC m=+0.084704441 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.959 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updating instance_info_cache with network_info: [{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.972 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.973 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.973 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.973 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.974 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.995 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.996 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.996 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:43:06 compute-0 nova_compute[188777]: 2026-02-19 20:43:06.996 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.085 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.136 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.138 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.197 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.203 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.255 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.256 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.302 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.308 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.354 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.355 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.404 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.414 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.468 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.470 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.519 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
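[editor's note] The resource audit above probes each instance disk with the command recorded verbatim in those `Running cmd (subprocess)` lines: `qemu-img info` wrapped in `oslo_concurrency.prlimit` so a runaway probe cannot exhaust the host. A sketch of one invocation — nova drives this through oslo.concurrency's processutils, but plain subprocess shows the shape:

```python
# Sketch of one disk probe from the audit above; the command list mirrors
# the log line exactly, only the wrapper around it is simplified.
import json
import subprocess

def qemu_img_info(disk_path: str) -> dict:
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap the child's address space at 1 GiB
        "--cpu=30",          # and its CPU time at 30 seconds
        "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk_path,
        "--force-share",     # read-only probe of a disk a running VM holds open
        "--output=json",
    ]
    return json.loads(subprocess.check_output(cmd))

info = qemu_img_info("/var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk")
print(info.get("virtual-size"))
```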
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.898 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.900 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4612MB free_disk=72.05458450317383GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
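The pci_devices field in that resource view is a flat JSON list, so it can be inspected directly; a standalone sketch that tallies the devices by vendor (the single-entry JSON literal below is trimmed from the line above, and the tally itself is only illustrative):

    import json
    from collections import Counter

    # One entry kept from the pci_devices list logged above (trimmed).
    pci_json = '[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}]'
    devices = json.loads(pci_json)
    print(Counter(d['vendor_id'] for d in devices))  # 1af4 = virtio, 8086 = Intel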
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.900 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:43:07 compute-0 nova_compute[188777]: 2026-02-19 20:43:07.900 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:43:08 compute-0 nova_compute[188777]: 2026-02-19 20:43:08.994 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.572 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.573 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.574 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.574 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.575 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.575 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.692 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.708 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
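Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio; a short sketch reproducing the arithmetic with the logged values:

    # Capacity formula used by Placement: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 70.2

So although only 4 physical vCPUs are free, the 4.0 allocation ratio leaves Placement with 32 schedulable VCPU units.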
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.710 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.711 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
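The acquire at 20:43:07.900 and this release 2.811 s later bracket the whole resource update under the "compute_resources" lock, taken through oslo.concurrency's synchronized decorator. A minimal sketch of that pattern (the lock name matches the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    # Nova builds its decorator with a 'nova-' lock-file prefix; same mechanism here.
    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def update_available_resource():
        # Runs only while "compute_resources" is held, so periodic updates
        # and instance claims cannot interleave.
        pass

    update_available_resource()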
Feb 19 20:43:10 compute-0 nova_compute[188777]: 2026-02-19 20:43:10.796 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:11 compute-0 nova_compute[188777]: 2026-02-19 20:43:11.002 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:11 compute-0 nova_compute[188777]: 2026-02-19 20:43:11.003 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:11 compute-0 nova_compute[188777]: 2026-02-19 20:43:11.022 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:12 compute-0 podman[256544]: 2026-02-19 20:43:12.438684981 +0000 UTC m=+0.100978451 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Feb 19 20:43:12 compute-0 podman[256543]: 2026-02-19 20:43:12.454883557 +0000 UTC m=+0.120173379 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, io.openshift.expose-services=)
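Both health_status events are emitted by podman's healthcheck timer running the 'test' command recorded in each container's config_data. The same check can be fired by hand; a sketch via subprocess (standard podman CLI, container name taken from the log):

    import subprocess

    # Exit status 0 means healthy, which is what produced health_status=healthy above.
    subprocess.run(['podman', 'healthcheck', 'run', 'ceilometer_agent_ipmi'], check=True)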
Feb 19 20:43:13 compute-0 nova_compute[188777]: 2026-02-19 20:43:13.998 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.153 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling cycle may therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.154 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] thread. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
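The two manager lines above say the [pollsters] source has more pollsters than worker threads, so the batch is pushed through a single-thread pool and runs serially. A standalone sketch of that scheduling behaviour (names are illustrative, not ceilometer's internals):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f"polled {name}"

    pollsters = ['network.outgoing.packets.error', 'network.outgoing.packets',
                 'network.incoming.bytes.delta', 'network.outgoing.bytes']

    # One worker, four tasks: the pool queues them and runs them one at a
    # time, which is why the cycle can take longer than expected.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)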
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.162 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.162 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.164 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f870d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.165 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'name': 'te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.169 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.172 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.175 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
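Each 'instance data' line carries the full record that discovery caches for the cycle; the pollsters that follow only read from it. A small sketch pulling out the fields metering typically keys on (the dict is trimmed from the last record above):

    instance = {
        'id': '1b6b1397-fda7-4470-883b-1cc5974fac84',
        'flavor': {'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1},
        'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db',
        'OS-EXT-STS:vm_state': 'running',
        'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'},
    }
    print(instance['id'][:8], instance['flavor']['name'],
          f"{instance['flavor']['ram']}MB", instance['OS-EXT-STS:vm_state'],
          instance['metadata'].get('metering.server_group', '-'))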
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.176 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.176 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.176 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:43:15.176691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.181 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.186 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.190 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.194 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.195 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.196 15 DEBUG ceilometer.polling.manager [-] Skipping pollster network.incoming.bytes.rate; no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.196 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.196 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.196 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.196 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.197 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.197 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.197 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.197 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:43:15.196870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.198 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.199 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.199 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.199 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.199 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:43:15.199517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.199 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.200 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.200 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.200 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
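The .delta variants report the change in the cumulative counter since the previous cycle, which is why three of the four instances show 0 here. A generic sketch of that bookkeeping (not ceilometer's exact cache code):

    previous = {}

    def delta_sample(instance_id, counter_value):
        # Delta = current cumulative counter minus the value cached from the
        # previous cycle; the first observation yields the full counter.
        last = previous.get(instance_id, 0)
        previous[instance_id] = counter_value
        return counter_value - last

    print(delta_sample('c7d04a5a', 1000))  # first cycle: 1000
    print(delta_sample('c7d04a5a', 1630))  # next cycle: 630, as logged above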
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.201 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.201 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.201 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.202 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.202 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.202 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:43:15.201904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.203 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.203 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.203 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.204 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.204 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.204 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:43:15.204312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.221 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.257 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.276 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.301 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.301 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
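All four power.state samples carry volume 1, consistent with libvirt's code for a running domain. A lookup-table sketch of the virDomainState codes (assuming the meter exports the raw libvirt state code, which volume 1 here matches):

    # libvirt virDomainState codes.
    LIBVIRT_POWER_STATES = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(LIBVIRT_POWER_STATES[1])  # 'running' -- the volume seen in the log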
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.302 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.303 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.303 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.303 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.303 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.303 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.303 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.304 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.304 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:43:15.302412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:43:15.304053) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.316 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.317 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.331 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.331 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.344 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.344 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.358 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.358 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.359 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
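The cycle that just finished shows the agent's standard per-pollster sequence: run discovery, check whether the source needs coordination (the hashrings are [None] here, so nothing is sharded across agents), update the heartbeat, then emit one sample per instance/device pair. A minimal, self-contained sketch of that flow, with hypothetical names and assumed device names, not the actual ceilometer internals:

```python
# Minimal sketch of the per-pollster cycle visible in the log above.
# All names (discover_instances, read_stats, run_pollster) are hypothetical.
from datetime import datetime, timezone

def discover_instances():
    # Stand-in for the libvirt "local_instances" discovery seen in the log.
    return ["c7d04a5a-1e2f-40c2-a686-18b23a5bddfa",
            "997ebdcf-7eab-485b-8fbf-d21112c78946"]

def read_stats(instance_id):
    # Each instance above reports two disks; device names are assumed.
    return [("vda", 1073741824), ("vdb", 509952)]

def run_pollster(name, hashring=None):
    instances = discover_instances()
    if hashring is not None:
        # With coordination enabled, a hashring would shard instances
        # across agents; the log shows hashrings=[None], so no sharding.
        instances = [i for i in instances if hashring.belongs_to_self(i)]
    print(f"heartbeat {name} {datetime.now(timezone.utc).isoformat()}")
    for inst in instances:
        for device, value in read_stats(inst):
            print(f"{inst}/{name} volume: {value}")

run_pollster("disk.device.capacity")
```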
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.359 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.359 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.359 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.359 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.359 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:43:15.359838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 podman[256582]: 2026-02-19 20:43:15.365474939 +0000 UTC m=+0.053997498 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
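The podman health event above embeds the container's configuration as a Python-style dict after config_data=. If that line format holds (an assumption about this log, not a podman API), the dict can be recovered with ast.literal_eval:

```python
# Sketch: pulling the config_data dict out of a podman health_status line
# like the one above. Parsing with ast.literal_eval assumes the dict is a
# valid Python literal, as it appears to be here; the line is abbreviated.
import ast

line = ("... container health_status ... health_status=healthy, ... "
        "config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', "
        "'ports': ['9882:9882'], 'restart': 'always'}, config_id=podman_exporter")

start = line.index("config_data=") + len("config_data=")
# The dict literal runs to its matching closing brace.
depth, end = 0, start
for i, ch in enumerate(line[start:], start):
    depth += ch == "{"
    depth -= ch == "}"
    if depth == 0:
        end = i + 1
        break
config = ast.literal_eval(line[start:end])
print(config["image"], config["ports"])
```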
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.390 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.390 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.426 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.426 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.459 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.459 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.495 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.496 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.496 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.496 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.496 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.496 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.496 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.497 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.497 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/cpu volume: 160490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.497 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 36880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.497 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 36560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:43:15.497003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.497 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 332510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
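The cpu samples above are cumulative guest CPU time in nanoseconds (e.g. 160490000000 for instance c7d04a5a...), so utilization only falls out of the difference between two polls. A worked sketch; the second reading, the 300 s interval, and the single-vCPU count are assumed for illustration:

```python
# Sketch: turning the cumulative cpu counter (ns of guest CPU time, as
# logged above) into a utilization percentage between two polls.
NS_PER_S = 1_000_000_000

prev_ns, curr_ns = 160_490_000_000, 160_790_000_000  # second value assumed
interval_s, vcpus = 300, 1                            # assumed

cpu_util_pct = (curr_ns - prev_ns) / NS_PER_S / (interval_s * vcpus) * 100
print(f"cpu_util = {cpu_util_pct:.2f}%")   # -> 0.10%
```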
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.498 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.499 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 816188800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.499 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 168782704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.499 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.499 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.499 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.500 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.500 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 916964403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.500 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 88997503 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
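The read.latency values just logged pair naturally with the disk.device.read.requests counters that appear further below: dividing one cumulative counter by the other gives a mean per-request read latency. A worked sketch using the first device of instance c7d04a5a...; treating both counters as lifetime totals in nanoseconds is an interpretation, not something the log states:

```python
# Sketch: mean per-request read latency from the paired cumulative counters.
total_latency_ns = 816_188_800   # disk.device.read.latency, first device
total_requests   = 1_061         # disk.device.read.requests, first device

mean_ms = total_latency_ns / total_requests / 1_000_000
print(f"mean read latency ~ {mean_ms:.2f} ms")   # ~ 0.77 ms
```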
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:43:15.498921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.501 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:43:15.501715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.502 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.502 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.502 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.502 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.502 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.503 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:43:15.503125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.504 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.505 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.505 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.505 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.505 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.505 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.505 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:43:15.504658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.506 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.507 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.507 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.507 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.507 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.507 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.507 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.508 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.509 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.509 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.509 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.509 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.509 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 72871936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.510 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.511 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.511 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.511 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 73187328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.511 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:43:15.506890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:43:15.508231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:43:15.510330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.512 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.513 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.513 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.513 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.513 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.513 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.513 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
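With this cycle done, each disk now has three gauges in the log: disk.device.capacity, disk.device.usage, and disk.device.allocation. Comparing them shows the thin-provisioning headroom. A sketch using the first disk of instance c7d04a5a...; reading capacity as the virtual disk size and allocation as bytes held on the backing store is an interpretation, not something the log states:

```python
# Sketch: relating the three per-device gauges logged in this and the
# earlier disk.device.capacity / disk.device.usage cycles above.
capacity   = 1_073_741_824   # disk.device.capacity
usage      = 29_884_416      # disk.device.usage
allocation = 30_154_752      # disk.device.allocation

print(f"virtual size: {capacity / 2**20:.0f} MiB")
print(f"guest usage:  {usage / 2**20:.1f} MiB")
print(f"allocated:    {allocation / 2**20:.1f} MiB "
      f"({allocation / capacity:.1%} of virtual size)")
```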
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:43:15.512934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 3416102538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.515 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.516 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.516 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 3724754156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.516 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.516 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:43:15.515060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:43:15.517441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.518 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.518 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.518 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.518 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.518 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.518 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.519 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.519 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.519 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.519 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.519 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.519 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.520 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.520 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.520 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 308 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.520 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:43:15.519074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.521 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
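The disk.ephemeral.size block above traces one complete cycle for a single pollster: discovery of local instances, a coordination check against the configured hash rings (none here), a heartbeat update, and the closing "Finished polling" marker. A minimal sketch of that control flow, with illustrative names rather than ceilometer's actual classes:

    # Hypothetical, simplified rendition of the per-pollster cycle logged above.
    import datetime
    import logging

    LOG = logging.getLogger(__name__)

    def run_pollster(name, pollster, discover, hashrings=None):
        resources = discover()                  # "Executing discovery process ..."
        if hashrings is None:                   # "... not configured in a source for
            LOG.debug("No coordination needed for %s", name)  # polling that requires coordination"
        LOG.debug("Pollster heartbeat update: %s", name)
        heartbeat = datetime.datetime.now(datetime.timezone.utc)  # later reported by _update_status
        samples = list(pollster(resources))     # yields the per-instance "volume:" lines
        LOG.info("Finished polling pollster %s in the context of pollsters", name)
        return heartbeat, samples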
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:43:15.521724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.522 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:43:15.522868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.523 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.524 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/memory.usage volume: 47.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.524 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.524 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.524 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: 42.23828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
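The memory.usage volumes above are megabytes per instance, derived from libvirt domain memory statistics. A sketch of how such figures can be computed with libvirt-python; treating usage as available minus unused is an assumption about the metric, not a quote of ceilometer's implementation:

    import libvirt  # requires the libvirt-python bindings

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        stats = dom.memoryStats()  # all values are reported in KiB
        if "available" in stats and "unused" in stats:
            usage_mb = (stats["available"] - stats["unused"]) / 1024.0
        else:
            usage_mb = stats.get("rss", 0) / 1024.0  # coarse fallback
        print(f"{dom.UUIDString()}/memory.usage volume: {usage_mb}")
    conn.close()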
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:43:15.523947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.525 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.526 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.526 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
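network.incoming.bytes is a cumulative per-vNIC counter read from the hypervisor; libvirt exposes it through interfaceStats(), keyed by the tap device name (the same devname nova logs in its network info cache further down). A sketch under those assumptions:

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        tree = ElementTree.fromstring(dom.XMLDesc())
        for target in tree.findall("devices/interface/target"):
            dev = target.get("dev")  # e.g. "tap6730c115-fc"
            # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
            #                         tx_bytes, tx_packets, tx_errs, tx_drop)
            rx_bytes, *_rest = dom.interfaceStats(dev)
            print(f"{dom.UUIDString()}/network.incoming.bytes volume: {rx_bytes}")
    conn.close()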
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:43:15.525368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.527 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.528 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:43:15.529 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:43:15 compute-0 nova_compute[188777]: 2026-02-19 20:43:15.799 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:17 compute-0 podman[256607]: 2026-02-19 20:43:17.384844087 +0000 UTC m=+0.073854418 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0)
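The podman health_status lines fire each time a container's periodic healthcheck runs; health_status=healthy with health_failing_streak=0 means the configured test command keeps succeeding. The same state can be read back by hand; a sketch, noting that podman >= 4 exposes the field as .State.Health while older releases used .State.Healthcheck:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ceilometer_agent_compute"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health.get("FailingStreak"))  # e.g. healthy 0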
Feb 19 20:43:19 compute-0 nova_compute[188777]: 2026-02-19 20:43:19.000 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:19 compute-0 podman[256626]: 2026-02-19 20:43:19.419970669 +0000 UTC m=+0.108855142 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 19 20:43:20 compute-0 nova_compute[188777]: 2026-02-19 20:43:20.802 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:24 compute-0 nova_compute[188777]: 2026-02-19 20:43:24.000 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:25 compute-0 nova_compute[188777]: 2026-02-19 20:43:25.806 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:29 compute-0 nova_compute[188777]: 2026-02-19 20:43:29.002 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:29 compute-0 podman[204724]: time="2026-02-19T20:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:43:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:43:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5307 "" "Go-http-client/1.1"
Feb 19 20:43:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:43:30.461 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:43:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:43:30.462 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:43:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:43:30.463 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
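The Acquiring/acquired/released triplet is oslo.concurrency's standard DEBUG trace for a named in-process lock, complete with the waited/held durations. A minimal sketch of the pattern that emits it:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; oslo logs the acquire/release
        # DEBUG lines, including how long the caller waited and held it.
        pass

    check_child_processes()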
Feb 19 20:43:30 compute-0 nova_compute[188777]: 2026-02-19 20:43:30.809 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:31 compute-0 openstack_network_exporter[207898]: ERROR   20:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:43:31 compute-0 openstack_network_exporter[207898]: ERROR   20:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
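This ERROR pair recurs on the exporter's 30-second scrape: the dpif-netdev/* commands only apply to the userspace (netdev) datapath, and this host runs only the kernel ("system") datapath, so ovs-vswitchd answers "please specify an existing datapath". The failure is reproducible by hand; shelling out to ovs-appctl here is an assumption, since the exporter itself (appctl.go) speaks to the unixctl socket directly:

    import subprocess

    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        res = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        if res.returncode != 0:
            # Expected wherever no userspace datapath exists.
            print(cmd, "->", (res.stderr or res.stdout).strip())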
Feb 19 20:43:34 compute-0 nova_compute[188777]: 2026-02-19 20:43:34.005 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:35 compute-0 podman[256657]: 2026-02-19 20:43:35.385360106 +0000 UTC m=+0.071490535 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:43:35 compute-0 podman[256656]: 2026-02-19 20:43:35.39753627 +0000 UTC m=+0.089427376 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, version=9.7, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:43:35 compute-0 nova_compute[188777]: 2026-02-19 20:43:35.812 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:37 compute-0 podman[256700]: 2026-02-19 20:43:37.408783428 +0000 UTC m=+0.082475642 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 19 20:43:39 compute-0 nova_compute[188777]: 2026-02-19 20:43:39.007 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:40 compute-0 nova_compute[188777]: 2026-02-19 20:43:40.815 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:43 compute-0 podman[256720]: 2026-02-19 20:43:43.393756629 +0000 UTC m=+0.073109305 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Feb 19 20:43:43 compute-0 podman[256719]: 2026-02-19 20:43:43.406893132 +0000 UTC m=+0.093825091 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Feb 19 20:43:44 compute-0 nova_compute[188777]: 2026-02-19 20:43:44.011 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:45 compute-0 nova_compute[188777]: 2026-02-19 20:43:45.819 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:46 compute-0 podman[256758]: 2026-02-19 20:43:46.405800334 +0000 UTC m=+0.088579770 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:43:48 compute-0 podman[256781]: 2026-02-19 20:43:48.412485073 +0000 UTC m=+0.093979325 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 19 20:43:49 compute-0 nova_compute[188777]: 2026-02-19 20:43:49.012 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:50 compute-0 podman[256803]: 2026-02-19 20:43:50.40380348 +0000 UTC m=+0.091178550 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 19 20:43:50 compute-0 nova_compute[188777]: 2026-02-19 20:43:50.822 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:54 compute-0 nova_compute[188777]: 2026-02-19 20:43:54.017 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:55 compute-0 nova_compute[188777]: 2026-02-19 20:43:55.826 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:56 compute-0 nova_compute[188777]: 2026-02-19 20:43:56.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:56 compute-0 nova_compute[188777]: 2026-02-19 20:43:56.263 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
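_reclaim_queued_deletes runs on its periodic schedule but returns immediately: with reclaim_instance_interval at its default of 0, soft-deleted instances are never reclaimed on this host. A sketch of the guard using oslo.config directly (the default of 0 matches nova's):

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])
    CONF([])  # initialize from an empty argv so options are readable

    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")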
Feb 19 20:43:58 compute-0 nova_compute[188777]: 2026-02-19 20:43:58.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:59 compute-0 nova_compute[188777]: 2026-02-19 20:43:59.020 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:43:59 compute-0 nova_compute[188777]: 2026-02-19 20:43:59.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:43:59 compute-0 podman[204724]: time="2026-02-19T20:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:43:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:43:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5303 "" "Go-http-client/1.1"
Feb 19 20:44:00 compute-0 nova_compute[188777]: 2026-02-19 20:44:00.830 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:01 compute-0 openstack_network_exporter[207898]: ERROR   20:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:44:01 compute-0 openstack_network_exporter[207898]: ERROR   20:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:44:03 compute-0 nova_compute[188777]: 2026-02-19 20:44:03.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:03 compute-0 nova_compute[188777]: 2026-02-19 20:44:03.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:44:04 compute-0 nova_compute[188777]: 2026-02-19 20:44:04.024 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:04 compute-0 nova_compute[188777]: 2026-02-19 20:44:04.650 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:44:04 compute-0 nova_compute[188777]: 2026-02-19 20:44:04.650 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:44:04 compute-0 nova_compute[188777]: 2026-02-19 20:44:04.650 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:44:05 compute-0 nova_compute[188777]: 2026-02-19 20:44:05.836 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:06 compute-0 podman[256832]: 2026-02-19 20:44:06.370824207 +0000 UTC m=+0.054258636 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 19 20:44:06 compute-0 podman[256831]: 2026-02-19 20:44:06.402759947 +0000 UTC m=+0.089824108 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.664 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updating instance_info_cache with network_info: [{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.683 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.684 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
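Each entry in the cached network_info carries the port, network, and subnet details nova needs without a round-trip to neutron. A sketch that pulls the fixed addresses out of one such record, using an abbreviated copy of the VIF logged above:

    import json

    # Abbreviated subset of the cached VIF record from the log line above.
    network_info = json.loads('''[{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878",
      "network": {"subnets": [{"cidr": "10.100.0.0/16",
        "ips": [{"address": "10.100.3.124", "type": "fixed"}]}]},
      "devname": "tap6730c115-fc"}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["devname"], ip["address"], ip["type"])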
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.684 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.712 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.713 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.713 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.714 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.795 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.851 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.852 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.908 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.913 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.962 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:07 compute-0 nova_compute[188777]: 2026-02-19 20:44:07.963 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.014 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.021 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.066 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.068 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.117 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.123 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.174 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.176 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.235 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:08 compute-0 podman[256897]: 2026-02-19 20:44:08.407638701 +0000 UTC m=+0.093673547 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.675 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.676 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4624MB free_disk=72.05458450317383GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.677 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.677 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.777 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.777 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.778 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.778 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.778 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.779 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.792 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.808 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.809 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.822 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.849 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.944 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.961 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.963 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:44:08 compute-0 nova_compute[188777]: 2026-02-19 20:44:08.963 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:44:09 compute-0 nova_compute[188777]: 2026-02-19 20:44:09.027 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:09 compute-0 nova_compute[188777]: 2026-02-19 20:44:09.543 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:09 compute-0 nova_compute[188777]: 2026-02-19 20:44:09.544 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:09 compute-0 nova_compute[188777]: 2026-02-19 20:44:09.544 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:09 compute-0 nova_compute[188777]: 2026-02-19 20:44:09.544 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:10 compute-0 nova_compute[188777]: 2026-02-19 20:44:10.839 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:13 compute-0 sshd-session[256829]: Connection closed by 103.119.94.10 port 46070 [preauth]
Feb 19 20:44:14 compute-0 nova_compute[188777]: 2026-02-19 20:44:14.029 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:14 compute-0 podman[256916]: 2026-02-19 20:44:14.389509075 +0000 UTC m=+0.077720526 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 19 20:44:14 compute-0 podman[256915]: 2026-02-19 20:44:14.400922635 +0000 UTC m=+0.090439397 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, io.openshift.expose-services=, release-0.7.12=, release=1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Feb 19 20:44:15 compute-0 nova_compute[188777]: 2026-02-19 20:44:15.844 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:17 compute-0 podman[256952]: 2026-02-19 20:44:17.381630369 +0000 UTC m=+0.065654826 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:44:19 compute-0 nova_compute[188777]: 2026-02-19 20:44:19.032 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:19 compute-0 podman[256976]: 2026-02-19 20:44:19.367375405 +0000 UTC m=+0.059468836 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0)
Feb 19 20:44:20 compute-0 nova_compute[188777]: 2026-02-19 20:44:20.847 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:21 compute-0 podman[256998]: 2026-02-19 20:44:21.422254473 +0000 UTC m=+0.115018881 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.034 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.265 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.266 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.267 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.268 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.269 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.269 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.292 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.309 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.310 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Image id e98a7b34-d7ef-4dcd-b1f3-0a369d480f18 yields fingerprint c4978917f5870b26b06a12225871f7dbd3da64fb _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.310 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] image e98a7b34-d7ef-4dcd-b1f3-0a369d480f18 at (/var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb): checking
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.311 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] image e98a7b34-d7ef-4dcd-b1f3-0a369d480f18 at (/var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.313 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.313 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Image id 17b9bce8-a91b-495d-ac33-cf63893413f9 yields fingerprint a9fd80910f614000293e8e5ea927829d2f3ef59c _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.314 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] image 17b9bce8-a91b-495d-ac33-cf63893413f9 at (/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c): checking
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.314 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] image 17b9bce8-a91b-495d-ac33-cf63893413f9 at (/var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.316 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] 997ebdcf-7eab-485b-8fbf-d21112c78946 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.316 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] 997ebdcf-7eab-485b-8fbf-d21112c78946 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.316 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.379 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.380 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 is backed by a9fd80910f614000293e8e5ea927829d2f3ef59c _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.381 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] dff9d513-54f8-4d73-acf7-df610dc4d064 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.381 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] dff9d513-54f8-4d73-acf7-df610dc4d064 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.382 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.427 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.428 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 is backed by a9fd80910f614000293e8e5ea927829d2f3ef59c _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.428 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] 1b6b1397-fda7-4470-883b-1cc5974fac84 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.429 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] 1b6b1397-fda7-4470-883b-1cc5974fac84 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.429 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.476 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.477 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 is backed by c4978917f5870b26b06a12225871f7dbd3da64fb _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.477 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.478 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.478 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.544 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.545 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa is backed by c4978917f5870b26b06a12225871f7dbd3da64fb _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.546 188781 WARNING nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.546 188781 WARNING nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.547 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Active base files: /var/lib/nova/instances/_base/c4978917f5870b26b06a12225871f7dbd3da64fb /var/lib/nova/instances/_base/a9fd80910f614000293e8e5ea927829d2f3ef59c
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.547 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Removable base files: /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.548 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ab3f72be2a6a58a25574f1d71543e651d74a575a
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.549 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9c0494affe141b25c092c57304050b881d108640
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.549 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.550 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.550 188781 DEBUG nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 19 20:44:24 compute-0 nova_compute[188777]: 2026-02-19 20:44:24.551 188781 INFO nova.virt.libvirt.imagecache [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Feb 19 20:44:25 compute-0 nova_compute[188777]: 2026-02-19 20:44:25.851 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:29 compute-0 nova_compute[188777]: 2026-02-19 20:44:29.037 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:29 compute-0 podman[204724]: time="2026-02-19T20:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:44:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:44:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5316 "" "Go-http-client/1.1"
Feb 19 20:44:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:44:30.463 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:44:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:44:30.464 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:44:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:44:30.465 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:44:30 compute-0 nova_compute[188777]: 2026-02-19 20:44:30.854 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:31 compute-0 openstack_network_exporter[207898]: ERROR   20:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:44:31 compute-0 openstack_network_exporter[207898]: ERROR   20:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:44:34 compute-0 nova_compute[188777]: 2026-02-19 20:44:34.039 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:35 compute-0 nova_compute[188777]: 2026-02-19 20:44:35.859 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:37 compute-0 podman[257036]: 2026-02-19 20:44:37.37167909 +0000 UTC m=+0.058752874 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:44:37 compute-0 podman[257035]: 2026-02-19 20:44:37.376929401 +0000 UTC m=+0.068960437 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.7, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Feb 19 20:44:39 compute-0 nova_compute[188777]: 2026-02-19 20:44:39.041 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:39 compute-0 podman[257076]: 2026-02-19 20:44:39.405725688 +0000 UTC m=+0.090862269 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:44:40 compute-0 nova_compute[188777]: 2026-02-19 20:44:40.862 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:44 compute-0 nova_compute[188777]: 2026-02-19 20:44:44.042 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:44 compute-0 podman[257095]: 2026-02-19 20:44:44.737610494 +0000 UTC m=+0.071141575 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible)
Feb 19 20:44:44 compute-0 podman[257096]: 2026-02-19 20:44:44.75803038 +0000 UTC m=+0.089240900 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 19 20:44:45 compute-0 nova_compute[188777]: 2026-02-19 20:44:45.867 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:48 compute-0 podman[257135]: 2026-02-19 20:44:48.397325177 +0000 UTC m=+0.077479070 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:44:49 compute-0 nova_compute[188777]: 2026-02-19 20:44:49.046 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:50 compute-0 podman[257160]: 2026-02-19 20:44:50.402990354 +0000 UTC m=+0.091160299 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 19 20:44:50 compute-0 nova_compute[188777]: 2026-02-19 20:44:50.870 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:52 compute-0 podman[257180]: 2026-02-19 20:44:52.405808765 +0000 UTC m=+0.092234342 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 19 20:44:54 compute-0 nova_compute[188777]: 2026-02-19 20:44:54.047 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:55 compute-0 nova_compute[188777]: 2026-02-19 20:44:55.875 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:57 compute-0 nova_compute[188777]: 2026-02-19 20:44:57.550 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:57 compute-0 nova_compute[188777]: 2026-02-19 20:44:57.552 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:44:58 compute-0 nova_compute[188777]: 2026-02-19 20:44:58.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:44:59 compute-0 nova_compute[188777]: 2026-02-19 20:44:59.052 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:44:59 compute-0 sshd-session[257206]: Received disconnect from 158.180.74.7 port 8262:11: Bye Bye [preauth]
Feb 19 20:44:59 compute-0 sshd-session[257206]: Disconnected from authenticating user root 158.180.74.7 port 8262 [preauth]
Feb 19 20:44:59 compute-0 podman[204724]: time="2026-02-19T20:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:44:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:44:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5321 "" "Go-http-client/1.1"
Feb 19 20:45:00 compute-0 nova_compute[188777]: 2026-02-19 20:45:00.879 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:01 compute-0 nova_compute[188777]: 2026-02-19 20:45:01.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:01 compute-0 openstack_network_exporter[207898]: ERROR   20:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:45:01 compute-0 openstack_network_exporter[207898]: ERROR   20:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.267 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.268 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.648 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.649 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.650 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:45:03 compute-0 nova_compute[188777]: 2026-02-19 20:45:03.650 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:45:04 compute-0 nova_compute[188777]: 2026-02-19 20:45:04.052 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:04 compute-0 nova_compute[188777]: 2026-02-19 20:45:04.813 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:45:04 compute-0 nova_compute[188777]: 2026-02-19 20:45:04.826 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:45:04 compute-0 nova_compute[188777]: 2026-02-19 20:45:04.827 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.262 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.485 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.510 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.511 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.512 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.512 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.632 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.689 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.691 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.745 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.757 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.841 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.843 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.884 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.904 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.917 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.976 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:05 compute-0 nova_compute[188777]: 2026-02-19 20:45:05.978 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.038 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.046 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.095 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.096 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.145 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.542 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.543 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4645MB free_disk=72.05458068847656GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.544 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.544 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.703 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.703 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.703 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.703 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.704 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.704 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.842 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.854 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.856 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:45:06 compute-0 nova_compute[188777]: 2026-02-19 20:45:06.856 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:08 compute-0 nova_compute[188777]: 2026-02-19 20:45:08.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:08 compute-0 nova_compute[188777]: 2026-02-19 20:45:08.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:08 compute-0 nova_compute[188777]: 2026-02-19 20:45:08.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:45:08 compute-0 podman[257232]: 2026-02-19 20:45:08.392374071 +0000 UTC m=+0.069446362 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, name=ubi9/ubi-minimal, release=1770267347, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 19 20:45:08 compute-0 podman[257233]: 2026-02-19 20:45:08.454603011 +0000 UTC m=+0.120805159 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:45:09 compute-0 nova_compute[188777]: 2026-02-19 20:45:09.054 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:09 compute-0 nova_compute[188777]: 2026-02-19 20:45:09.278 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:10 compute-0 nova_compute[188777]: 2026-02-19 20:45:10.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:10 compute-0 nova_compute[188777]: 2026-02-19 20:45:10.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:10 compute-0 podman[257274]: 2026-02-19 20:45:10.396363848 +0000 UTC m=+0.080559434 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 19 20:45:10 compute-0 nova_compute[188777]: 2026-02-19 20:45:10.895 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:14 compute-0 nova_compute[188777]: 2026-02-19 20:45:14.058 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.154 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.154 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.154 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
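Note: the run of "Registering pollster [...]" records above shows every stevedore extension being handed to one shared ThreadPoolExecutor together with per-cycle cache, history, and discovery-cache dicts that start empty. A minimal sketch of that dispatch pattern, assuming each pollster is a callable; names here are illustrative, not the ceilometer internals:

    from concurrent.futures import ThreadPoolExecutor

    def execute_polling_task(pollsters, threads=1):
        # One shared executor ("with [1] threads" in the log) and per-cycle
        # state dicts that start empty ("with cache [{}], pollster history
        # [{}], and discovery cache [{}]").
        cache, history, discovery_cache = {}, {}, {}
        with ThreadPoolExecutor(max_workers=threads) as executor:
            futures = [executor.submit(p, cache, history, discovery_cache)
                       for p in pollsters]
            return [f.result() for f in futures]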
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.160 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'name': 'te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.162 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.164 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.167 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
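Note: the four discover_libvirt_polling records above inline the flavor in each instance dict, so per-host resource totals fall straight out of the logged data (all four instances are m1.nano):

    # Values copied from the instance-data records above.
    instances = {
        "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa": {"vcpus": 1, "ram": 128, "disk": 1},
        "997ebdcf-7eab-485b-8fbf-d21112c78946": {"vcpus": 1, "ram": 128, "disk": 1},
        "dff9d513-54f8-4d73-acf7-df610dc4d064": {"vcpus": 1, "ram": 128, "disk": 1},
        "1b6b1397-fda7-4470-883b-1cc5974fac84": {"vcpus": 1, "ram": 128, "disk": 1},
    }
    totals = {key: sum(f[key] for f in instances.values())
              for key in ("vcpus", "ram", "disk")}
    print(totals)   # {'vcpus': 4, 'ram': 512, 'disk': 4}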
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.168 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.168 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.168 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.168 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:45:15.168508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.172 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.175 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.179 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.181 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
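Note: each pollster run above repeats the same sequence — coordination check (the group name is None, so no hashring filtering applies), a heartbeat enqueued on the polling thread (15), heartbeat persistence on a separate status thread (12), then one sample per instance. A condensed, runnable sketch of that two-thread flow, with hypothetical function names:

    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()
    status = {}

    def heartbeat_worker():
        # Separate thread (thread 12 in this log) that persists the
        # "Updated heartbeat for <name> (<timestamp>)" records.
        while True:
            name = heartbeats.get()
            status[name] = datetime.datetime.now(datetime.timezone.utc)

    threading.Thread(target=heartbeat_worker, daemon=True).start()

    def poll_cycle(name, resources, get_sample):
        # Thread 15's side: the coordination group is None here, so no
        # resources are filtered out before polling.
        heartbeats.put(name)              # "Pollster heartbeat update: <name>"
        return [get_sample(r) for r in resources]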
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
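Note: the rate pollsters (network.incoming.bytes.rate here, network.outgoing.bytes.rate later at 20:45:15.497) are skipped with "no new resources found this cycle". One plausible reading, not lifted from the ceilometer source, is a per-cycle discovery cache that reports nothing new for that source:

    def maybe_poll(name, discovery_method, discovery_cache, discover, poll):
        # Reuse this cycle's discovery result for the source; an empty
        # result yields the "Skip pollster ..." message instead of a poll.
        resources = discovery_cache.setdefault(discovery_method,
                                               discover(discovery_method))
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return poll(resources)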
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.182 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.183 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.183 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:45:15.182902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.183 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.184 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.184 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
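Note: the per-instance volumes (16, 28, 107, 31 packets) are cumulative counters tagged with the instance UUID, one "<uuid>/<meter> volume: N" record per sample. A minimal record of that shape; the field names are assumptions for illustration, not ceilometer's actual Sample class:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Sample:
        name: str
        resource_id: str
        volume: int
        unit: str
        timestamp: str

    s = Sample(
        name="network.outgoing.packets",
        resource_id="dff9d513-54f8-4d73-acf7-df610dc4d064",
        volume=107,                      # cumulative packets, from the log
        unit="packet",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )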
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.184 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.185 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.185 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.185 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.185 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.185 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
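Note: the .delta meters report the change in a cumulative counter since the previous poll, so the zeros above simply mean no traffic landed between cycles. A sketch of the assumed delta logic, clamped at zero across counter resets:

    previous = {}

    def bytes_delta(instance_id, current_total):
        # First observation yields 0; later calls return the increase
        # since the prior cycle, never negative (e.g. after a reboot).
        delta = current_total - previous.get(instance_id, current_total)
        previous[instance_id] = current_total
        return max(delta, 0)

    bytes_delta("c7d04a5a-1e2f-40c2-a686-18b23a5bddfa", 1620)   # -> 0
    bytes_delta("c7d04a5a-1e2f-40c2-a686-18b23a5bddfa", 1620)   # unchanged -> 0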
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.186 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.186 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.187 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.187 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:45:15.184998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.187 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:45:15.187285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.187 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.188 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.188 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.188 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.189 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.189 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.189 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:45:15.189222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.206 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.234 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.253 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.282 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.282 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
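Note: power.state volume 1 for all four instances decodes, under Nova's power-state enum, to RUNNING — consistent with OS-EXT-STS:vm_state 'running' in the discovery dumps above:

    # Nova-style power-state codes; "volume: 1" above decodes to RUNNING.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATES[1])   # RUNNING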
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.282 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.283 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:45:15.283191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.283 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.284 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.284 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.284 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.284 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.285 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.285 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.285 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:45:15.285391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.301 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.302 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.321 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.322 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.338 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.339 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.354 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.355 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.355 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
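Note: each instance reports two disk.device.capacity samples. The first, 1073741824 bytes, is exactly the 1 GiB root disk of the m1.nano flavor (disk=1); the second, much smaller device (509952 or 485376 bytes) is plausibly a config-drive volume, though that attribution is an assumption here. Quick check:

    assert 1073741824 == 1 * 2**30           # m1.nano root disk: 1 GiB exactly
    print(509952 / 1024, 485376 / 1024)      # 498.0 474.0 (KiB, second device)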
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.355 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.355 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.356 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.356 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:45:15.356108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 podman[257293]: 2026-02-19 20:45:15.377549968 +0000 UTC m=+0.066075178 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.388 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.389 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 podman[257294]: 2026-02-19 20:45:15.421878758 +0000 UTC m=+0.105288562 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.428 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.429 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.461 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.461 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.493 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.494 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.494 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
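Note: the podman records interleaved above (20:45:15.377 for kepler, 20:45:15.421 for ceilometer_agent_ipmi) come from each container's own healthcheck timer firing independently of the polling loop. The same check can be run by hand; a minimal sketch using podman's healthcheck subcommand:

    import subprocess

    # Re-run the container healthcheck that produced the
    # "health_status=healthy" records above; exit code 0 means healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "kepler"])
    print("healthy" if result.returncode == 0 else "unhealthy")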
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.494 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.494 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.495 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.495 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.495 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.495 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/cpu volume: 280200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:45:15.495275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.496 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 38150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.496 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 37880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.496 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 333800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
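Note: the cpu volumes are cumulative guest CPU time in nanoseconds, so utilisation over a polling interval comes from two successive samples, normalised by the interval and the flavor's vCPU count (1 for m1.nano). A sketch of that derivation; the interval and second sample below are made up for illustration:

    def cpu_util_percent(ns_prev, ns_curr, interval_s, vcpus=1):
        # Fraction of available CPU time consumed over the interval.
        return (ns_curr - ns_prev) / (interval_s * 1e9 * vcpus) * 100

    # First sample from the log; +0.6 s of CPU over an assumed 30 s interval.
    print(cpu_util_percent(280_200_000_000, 280_800_000_000, 30))   # 2.0 (%)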
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.497 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 816188800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.498 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 168782704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:45:15.497822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.498 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.498 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.499 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.499 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.499 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 916964403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.499 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 88997503 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
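Note: disk.device.read.latency is a cumulative time-spent-reading counter in nanoseconds; dividing by the cumulative read-request count would give a mean per-request latency. The request counter is not in this excerpt, so the pairing below is an assumption:

    total_read_ns = 816_188_800   # c7d04a5a.../disk.device.read.latency, above
    read_requests = 1_200         # hypothetical disk.device.read.requests value
    print(total_read_ns / read_requests / 1e6, "ms per read")   # ~0.68 ms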
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.500 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:45:15.500730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.501 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.501 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.501 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.502 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.502 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.502 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.502 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.502 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.503 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:45:15.502895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.503 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.503 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.504 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.504 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.504 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.504 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.505 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:45:15.505047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.505 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.505 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.506 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.506 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.506 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.506 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.506 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.507 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.507 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.507 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.507 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.508 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.508 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.508 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:45:15.508132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.508 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.509 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.509 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.509 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.509 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.509 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.510 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.510 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:45:15.510189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.510 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.510 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.511 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.511 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.511 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.511 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.511 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.512 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.512 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.512 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.512 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.513 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.513 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.513 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.513 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 72871936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.513 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.513 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:45:15.513154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.514 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.514 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.514 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.515 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.515 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.515 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.515 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.516 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.516 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.516 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.516 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:45:15.516327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.516 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.517 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.517 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.517 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.517 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.517 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.518 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.518 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.518 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:45:15.519455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.519 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 3416102538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.520 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.520 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.520 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.520 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.521 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.521 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 3725974324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.521 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.522 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.523 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.523 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.523 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:45:15.522581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.524 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.524 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.524 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.525 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:45:15.524987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.525 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.525 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.525 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.526 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.526 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.526 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.526 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.526 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.527 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.527 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.527 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.527 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.527 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.528 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.528 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.529 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.529 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.529 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.529 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.530 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:45:15.527979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:45:15.529624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.530 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.531 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.531 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.531 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.531 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:45:15.531332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.531 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/memory.usage volume: 47.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.532 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.532 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.532 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: 42.234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.533 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.534 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.534 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.534 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:45:15.533384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.535 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:45:15.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:45:15 compute-0 nova_compute[188777]: 2026-02-19 20:45:15.898 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:19 compute-0 nova_compute[188777]: 2026-02-19 20:45:19.060 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:19 compute-0 podman[257331]: 2026-02-19 20:45:19.393457495 +0000 UTC m=+0.082084161 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:45:20 compute-0 nova_compute[188777]: 2026-02-19 20:45:20.903 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:21 compute-0 nova_compute[188777]: 2026-02-19 20:45:21.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:21 compute-0 podman[257356]: 2026-02-19 20:45:21.388203647 +0000 UTC m=+0.082100591 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:45:23 compute-0 podman[257376]: 2026-02-19 20:45:23.401994024 +0000 UTC m=+0.080568254 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 19 20:45:24 compute-0 nova_compute[188777]: 2026-02-19 20:45:24.063 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:25 compute-0 nova_compute[188777]: 2026-02-19 20:45:25.907 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:27 compute-0 nova_compute[188777]: 2026-02-19 20:45:27.291 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:27 compute-0 nova_compute[188777]: 2026-02-19 20:45:27.291 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:45:27 compute-0 nova_compute[188777]: 2026-02-19 20:45:27.310 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:45:29 compute-0 nova_compute[188777]: 2026-02-19 20:45:29.065 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:29 compute-0 sshd-session[257401]: Invalid user ubuntu from 160.187.147.124 port 56318
Feb 19 20:45:29 compute-0 sshd-session[257401]: Received disconnect from 160.187.147.124 port 56318:11: Bye Bye [preauth]
Feb 19 20:45:29 compute-0 podman[204724]: time="2026-02-19T20:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:45:29 compute-0 sshd-session[257401]: Disconnected from invalid user ubuntu 160.187.147.124 port 56318 [preauth]
Feb 19 20:45:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:45:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5312 "" "Go-http-client/1.1"
Feb 19 20:45:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:45:30.464 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:45:30.465 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:45:30.465 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:30 compute-0 nova_compute[188777]: 2026-02-19 20:45:30.908 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:31 compute-0 sshd-session[257403]: Received disconnect from 103.250.11.249 port 46870:11: Bye Bye [preauth]
Feb 19 20:45:31 compute-0 sshd-session[257403]: Disconnected from authenticating user root 103.250.11.249 port 46870 [preauth]
Feb 19 20:45:31 compute-0 openstack_network_exporter[207898]: ERROR   20:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:45:31 compute-0 openstack_network_exporter[207898]: ERROR   20:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:45:34 compute-0 nova_compute[188777]: 2026-02-19 20:45:34.069 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:35 compute-0 nova_compute[188777]: 2026-02-19 20:45:35.910 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:39 compute-0 nova_compute[188777]: 2026-02-19 20:45:39.070 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:39 compute-0 podman[257405]: 2026-02-19 20:45:39.374604463 +0000 UTC m=+0.061251562 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, managed_by=edpm_ansible)
Feb 19 20:45:39 compute-0 podman[257406]: 2026-02-19 20:45:39.377630195 +0000 UTC m=+0.059314301 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:45:40 compute-0 nova_compute[188777]: 2026-02-19 20:45:40.914 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:41 compute-0 podman[257446]: 2026-02-19 20:45:41.375864045 +0000 UTC m=+0.063711687 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:45:44 compute-0 nova_compute[188777]: 2026-02-19 20:45:44.071 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.294 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.321 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.322 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid dff9d513-54f8-4d73-acf7-df610dc4d064 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.322 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid 1b6b1397-fda7-4470-883b-1cc5974fac84 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.322 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Triggering sync for uuid c7d04a5a-1e2f-40c2-a686-18b23a5bddfa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.323 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "997ebdcf-7eab-485b-8fbf-d21112c78946" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.323 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.324 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "dff9d513-54f8-4d73-acf7-df610dc4d064" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.324 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.325 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.325 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.326 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.326 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.396 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "997ebdcf-7eab-485b-8fbf-d21112c78946" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.400 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "dff9d513-54f8-4d73-acf7-df610dc4d064" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.402 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.414 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:45:45 compute-0 nova_compute[188777]: 2026-02-19 20:45:45.917 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:46 compute-0 podman[257466]: 2026-02-19 20:45:46.381141856 +0000 UTC m=+0.065229033 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Feb 19 20:45:46 compute-0 podman[257467]: 2026-02-19 20:45:46.388950706 +0000 UTC m=+0.070879157 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:45:49 compute-0 nova_compute[188777]: 2026-02-19 20:45:49.074 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:50 compute-0 podman[257506]: 2026-02-19 20:45:50.367617969 +0000 UTC m=+0.055017809 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:45:50 compute-0 nova_compute[188777]: 2026-02-19 20:45:50.921 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:52 compute-0 podman[257530]: 2026-02-19 20:45:52.383942994 +0000 UTC m=+0.070953809 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 19 20:45:54 compute-0 nova_compute[188777]: 2026-02-19 20:45:54.075 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:54 compute-0 podman[257548]: 2026-02-19 20:45:54.44103927 +0000 UTC m=+0.126969279 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:45:55 compute-0 nova_compute[188777]: 2026-02-19 20:45:55.925 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:59 compute-0 nova_compute[188777]: 2026-02-19 20:45:59.077 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:45:59 compute-0 nova_compute[188777]: 2026-02-19 20:45:59.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:45:59 compute-0 nova_compute[188777]: 2026-02-19 20:45:59.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:45:59 compute-0 podman[204724]: time="2026-02-19T20:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:45:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:45:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5312 "" "Go-http-client/1.1"
Feb 19 20:46:00 compute-0 nova_compute[188777]: 2026-02-19 20:46:00.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:00 compute-0 nova_compute[188777]: 2026-02-19 20:46:00.929 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:01 compute-0 nova_compute[188777]: 2026-02-19 20:46:01.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:01 compute-0 openstack_network_exporter[207898]: ERROR   20:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:46:01 compute-0 openstack_network_exporter[207898]: ERROR   20:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:46:03 compute-0 nova_compute[188777]: 2026-02-19 20:46:03.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:03 compute-0 nova_compute[188777]: 2026-02-19 20:46:03.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:46:03 compute-0 nova_compute[188777]: 2026-02-19 20:46:03.669 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:46:03 compute-0 nova_compute[188777]: 2026-02-19 20:46:03.669 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:46:03 compute-0 nova_compute[188777]: 2026-02-19 20:46:03.670 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:46:04 compute-0 nova_compute[188777]: 2026-02-19 20:46:04.079 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.337 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.352 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.353 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.353 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.382 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.383 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.383 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.383 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.461 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.532 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.533 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.590 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.600 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.652 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.654 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.728 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.735 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.789 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.790 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.852 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.860 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.918 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.919 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.935 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:05 compute-0 nova_compute[188777]: 2026-02-19 20:46:05.989 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.415 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.417 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4617MB free_disk=72.05470275878906GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.417 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.417 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.507 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.507 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.507 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.508 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.508 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.508 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.704 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.721 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
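Placement derives usable capacity for each resource class as (total - reserved) * allocation_ratio, which is why the inventory above can advertise far more VCPU than the 8 physical ones. Worked out from the logged dict:

    # Effective capacity per resource class: (total - reserved) * allocation_ratio,
    # using the inventory reported in the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2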
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.722 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:46:06 compute-0 nova_compute[188777]: 2026-02-19 20:46:06.722 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
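The acquire/"waited"/"held" triplets around "compute_resources" are oslo.concurrency's lock instrumentation. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is illustrative:

    # Serialize access to resource-tracker state under a named lock, as the
    # "compute_resources" log lines above do.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass   # critical section: one thread at a time touches the tracker

    update_available_resource()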
Feb 19 20:46:09 compute-0 nova_compute[188777]: 2026-02-19 20:46:09.082 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:10 compute-0 podman[257601]: 2026-02-19 20:46:10.381704147 +0000 UTC m=+0.068459662 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1770267347, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Feb 19 20:46:10 compute-0 podman[257602]: 2026-02-19 20:46:10.425053497 +0000 UTC m=+0.100391082 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
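The container health_status events above are podman's scheduled healthchecks firing; the 'healthcheck' key in each config_data gives the check command and mount. The same check can be run by hand, since podman healthcheck run exits 0 when the check passes:

    # Trigger the same healthchecks podman runs on its timer.
    import subprocess

    for name in ("openstack_network_exporter", "node_exporter"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")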
Feb 19 20:46:10 compute-0 nova_compute[188777]: 2026-02-19 20:46:10.939 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:11 compute-0 nova_compute[188777]: 2026-02-19 20:46:11.633 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:11 compute-0 nova_compute[188777]: 2026-02-19 20:46:11.633 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:11 compute-0 nova_compute[188777]: 2026-02-19 20:46:11.633 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:46:11 compute-0 nova_compute[188777]: 2026-02-19 20:46:11.634 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
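The four tasks above come from nova's ComputeManager, whose methods are registered with oslo.service's periodic-task decorator and dispatched by run_periodic_tasks. A minimal sketch of that registration pattern, assuming oslo.service and oslo.config are available; the spacing and task body are illustrative:

    # How periodic tasks like _poll_rebooting_instances are wired up.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)   # illustrative interval
        def _poll_rebooting_instances(self, context):
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)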
Feb 19 20:46:12 compute-0 podman[257646]: 2026-02-19 20:46:12.379392269 +0000 UTC m=+0.070579746 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 19 20:46:14 compute-0 nova_compute[188777]: 2026-02-19 20:46:14.083 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:15 compute-0 nova_compute[188777]: 2026-02-19 20:46:15.941 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:17 compute-0 podman[257667]: 2026-02-19 20:46:17.36775117 +0000 UTC m=+0.057361081 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 19 20:46:17 compute-0 podman[257666]: 2026-02-19 20:46:17.371108263 +0000 UTC m=+0.063065866 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=kepler, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:46:19 compute-0 nova_compute[188777]: 2026-02-19 20:46:19.086 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:20 compute-0 nova_compute[188777]: 2026-02-19 20:46:20.945 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:21 compute-0 podman[257706]: 2026-02-19 20:46:21.369590896 +0000 UTC m=+0.052277664 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:46:23 compute-0 podman[257730]: 2026-02-19 20:46:23.370852584 +0000 UTC m=+0.060955065 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:46:24 compute-0 nova_compute[188777]: 2026-02-19 20:46:24.089 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:25 compute-0 podman[257751]: 2026-02-19 20:46:25.389595688 +0000 UTC m=+0.081630431 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 19 20:46:25 compute-0 nova_compute[188777]: 2026-02-19 20:46:25.948 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:26 compute-0 sshd-session[257749]: Invalid user daniel from 154.12.80.151 port 55892
Feb 19 20:46:26 compute-0 sshd-session[257749]: Received disconnect from 154.12.80.151 port 55892:11: Bye Bye [preauth]
Feb 19 20:46:26 compute-0 sshd-session[257749]: Disconnected from invalid user daniel 154.12.80.151 port 55892 [preauth]
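The three sshd-session lines above are a failed password-guessing probe: an unknown user name, then a client-side disconnect before authentication. A small sketch that tallies such probes per source address, keyed on the exact "Invalid user NAME from IP port PORT" format seen here (feed it journal text on stdin):

    # Count SSH "Invalid user" probes per source IP from journal text.
    import re
    import sys
    from collections import Counter

    PAT = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")
    hits = Counter(m.group(2) for line in sys.stdin if (m := PAT.search(line)))
    for ip, n in hits.most_common():
        print(f"{ip}: {n} attempts")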
Feb 19 20:46:29 compute-0 nova_compute[188777]: 2026-02-19 20:46:29.090 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:29 compute-0 podman[204724]: time="2026-02-19T20:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:46:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:46:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5315 "" "Go-http-client/1.1"
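The two HTTP lines are podman's REST service answering libpod API calls over the Unix socket that podman_exporter mounts (/run/podman/podman.sock, per its config_data above). The same request can be issued with nothing but the standard library:

    # GET the same libpod endpoint seen in the access-log line above,
    # over podman's Unix socket.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")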
Feb 19 20:46:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:46:30.466 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:46:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:46:30.466 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:46:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:46:30.467 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:46:30 compute-0 nova_compute[188777]: 2026-02-19 20:46:30.952 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:31 compute-0 openstack_network_exporter[207898]: ERROR   20:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:46:31 compute-0 openstack_network_exporter[207898]: ERROR   20:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
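Both appctl targets (dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show) exist only for Open vSwitch's userspace dpif-netdev datapath with PMD threads; on a host whose bridges use the kernel datapath, as here, the calls fail with "please specify an existing datapath", so these recurring exporter errors are noise rather than a fault. Reproducing the probe by hand, assuming ovs-appctl is on PATH:

    # Reproduce the exporter's PMD probe; without a userspace (netdev)
    # datapath this fails exactly as logged.
    import subprocess

    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                       capture_output=True, text=True)
    print(r.returncode, (r.stderr or r.stdout).strip())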
Feb 19 20:46:34 compute-0 nova_compute[188777]: 2026-02-19 20:46:34.092 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:35 compute-0 nova_compute[188777]: 2026-02-19 20:46:35.956 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:39 compute-0 nova_compute[188777]: 2026-02-19 20:46:39.095 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:40 compute-0 nova_compute[188777]: 2026-02-19 20:46:40.960 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:41 compute-0 podman[257778]: 2026-02-19 20:46:41.384392013 +0000 UTC m=+0.061159142 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:46:41 compute-0 podman[257777]: 2026-02-19 20:46:41.391948329 +0000 UTC m=+0.069004677 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9/ubi-minimal, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, release=1770267347, architecture=x86_64, config_id=openstack_network_exporter)
Feb 19 20:46:43 compute-0 podman[257820]: 2026-02-19 20:46:43.392224636 +0000 UTC m=+0.073808167 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:46:44 compute-0 nova_compute[188777]: 2026-02-19 20:46:44.098 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:45 compute-0 nova_compute[188777]: 2026-02-19 20:46:45.964 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:48 compute-0 podman[257838]: 2026-02-19 20:46:48.424901241 +0000 UTC m=+0.087894707 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:46:48 compute-0 podman[257837]: 2026-02-19 20:46:48.450849982 +0000 UTC m=+0.117032297 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Feb 19 20:46:49 compute-0 nova_compute[188777]: 2026-02-19 20:46:49.100 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:50 compute-0 nova_compute[188777]: 2026-02-19 20:46:50.969 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:52 compute-0 podman[257879]: 2026-02-19 20:46:52.382010988 +0000 UTC m=+0.073566179 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:46:54 compute-0 nova_compute[188777]: 2026-02-19 20:46:54.101 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:54 compute-0 podman[257902]: 2026-02-19 20:46:54.40897479 +0000 UTC m=+0.085118649 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 19 20:46:55 compute-0 nova_compute[188777]: 2026-02-19 20:46:55.973 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:56 compute-0 podman[257922]: 2026-02-19 20:46:56.416471943 +0000 UTC m=+0.090344013 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 19 20:46:59 compute-0 nova_compute[188777]: 2026-02-19 20:46:59.103 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:46:59 compute-0 podman[204724]: time="2026-02-19T20:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:46:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:46:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5311 "" "Go-http-client/1.1"
Feb 19 20:47:00 compute-0 nova_compute[188777]: 2026-02-19 20:47:00.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:00 compute-0 nova_compute[188777]: 2026-02-19 20:47:00.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:00 compute-0 nova_compute[188777]: 2026-02-19 20:47:00.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:47:00 compute-0 nova_compute[188777]: 2026-02-19 20:47:00.975 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:01 compute-0 openstack_network_exporter[207898]: ERROR   20:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:47:01 compute-0 openstack_network_exporter[207898]: ERROR   20:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:47:03 compute-0 nova_compute[188777]: 2026-02-19 20:47:03.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:03 compute-0 nova_compute[188777]: 2026-02-19 20:47:03.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:47:03 compute-0 nova_compute[188777]: 2026-02-19 20:47:03.836 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:47:03 compute-0 nova_compute[188777]: 2026-02-19 20:47:03.837 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:47:03 compute-0 nova_compute[188777]: 2026-02-19 20:47:03.838 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:47:04 compute-0 nova_compute[188777]: 2026-02-19 20:47:04.105 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:05 compute-0 nova_compute[188777]: 2026-02-19 20:47:05.979 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:06 compute-0 nova_compute[188777]: 2026-02-19 20:47:06.584 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updating instance_info_cache with network_info: [{"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:47:06 compute-0 nova_compute[188777]: 2026-02-19 20:47:06.606 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-1b6b1397-fda7-4470-883b-1cc5974fac84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:47:06 compute-0 nova_compute[188777]: 2026-02-19 20:47:06.606 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
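The heal task above rewrote the instance's cached network_info with the JSON shown at 20:47:06.584. A sketch pulling the MAC and fixed IPs out of that structure, trimmed to the fields used here:

    # Extract MAC and fixed IPs from a nova network_info cache entry,
    # abbreviated from the log line above.
    network_info = [{
        "address": "fa:16:3e:56:ea:b9",
        "network": {"subnets": [
            {"ips": [{"address": "10.100.1.142", "type": "fixed"}]},
        ]},
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["address"], ips)   # fa:16:3e:56:ea:b9 ['10.100.1.142']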
Feb 19 20:47:06 compute-0 nova_compute[188777]: 2026-02-19 20:47:06.606 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.620 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.621 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.621 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
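The acquire/release pair above is oslo.concurrency's in-process lock, which emits exactly these "Acquiring"/"acquired"/"released" DEBUG lines. A minimal sketch of the same pattern, with only the lock name taken from the log:

    from oslo_concurrency import lockutils

    # Context-manager form:
    with lockutils.lock("compute_resources"):
        pass  # critical section, e.g. mutating resource tracker state

    # Decorator form, as ResourceTracker methods use it:
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass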
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.622 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:47:07 compute-0 sshd-session[257950]: Invalid user newuser from 103.103.245.7 port 55724
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.856 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.910 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:07 compute-0 nova_compute[188777]: 2026-02-19 20:47:07.912 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.004 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.013 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.071 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.072 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.124 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.132 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.190 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.193 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.241 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.250 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.300 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.301 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.367 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
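Each qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which re-execs the command with an address-space and CPU-time cap (the --as and --cpu arguments in the logged command lines). A minimal sketch with the same limits; the disk path placeholder is illustrative:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu

    # Expands to the logged "python3 -m oslo_concurrency.prlimit ... --
    # env LC_ALL=C LANG=C qemu-img info ..." command line.
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<instance-uuid>/disk",
        "--force-share", "--output=json",
        prlimit=limits)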
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.737 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.738 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=72.05464553833008GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.739 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.739 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.837 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.838 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.838 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.839 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.839 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.840 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.940 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.953 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
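The inventory record above is what Placement turns into schedulable capacity, via capacity = (total - reserved) * allocation_ratio per resource class. Worked through with the logged numbers:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0   MEMORY_MB 7167.0   DISK_GB 70.2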
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.955 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:47:08 compute-0 nova_compute[188777]: 2026-02-19 20:47:08.955 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:47:09 compute-0 nova_compute[188777]: 2026-02-19 20:47:09.108 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:09 compute-0 sshd-session[257950]: Received disconnect from 103.103.245.7 port 55724:11: Bye Bye [preauth]
Feb 19 20:47:09 compute-0 sshd-session[257950]: Disconnected from invalid user newuser 103.103.245.7 port 55724 [preauth]
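The three sshd-session lines interleaved above record a failed login probe for a nonexistent user. A quick, illustrative way to tally such probes from a saved copy of this log (the file name is hypothetical):

    import re
    from collections import Counter

    attempts = Counter()
    with open("compute-0-messages.log") as log:   # hypothetical path
        for line in log:
            m = re.search(r"Invalid user (\S+) from (\S+) port", line)
            if m:
                attempts[(m.group(2), m.group(1))] += 1

    for (src, user), n in attempts.most_common():
        print(f"{src} tried user {user!r} {n} time(s)")
    # 103.103.245.7 tried user 'newuser' 1 time(s)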
Feb 19 20:47:09 compute-0 nova_compute[188777]: 2026-02-19 20:47:09.951 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:10 compute-0 nova_compute[188777]: 2026-02-19 20:47:10.289 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:10 compute-0 nova_compute[188777]: 2026-02-19 20:47:10.983 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:11 compute-0 nova_compute[188777]: 2026-02-19 20:47:11.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:11 compute-0 nova_compute[188777]: 2026-02-19 20:47:11.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:12 compute-0 nova_compute[188777]: 2026-02-19 20:47:12.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:47:12 compute-0 podman[257976]: 2026-02-19 20:47:12.400940254 +0000 UTC m=+0.068047887 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, release=1770267347, version=9.7)
Feb 19 20:47:12 compute-0 podman[257977]: 2026-02-19 20:47:12.416009894 +0000 UTC m=+0.081680962 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:47:14 compute-0 nova_compute[188777]: 2026-02-19 20:47:14.110 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:14 compute-0 podman[258016]: 2026-02-19 20:47:14.378027337 +0000 UTC m=+0.062346819 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
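All three podman health_status lines above come from container healthchecks, and the same status can be read back on the host. A sketch using podman's inspect output (container names taken from the log, the rest illustrative):

    import json
    import subprocess

    for name in ("openstack_network_exporter", "node_exporter",
                 "ovn_metadata_agent"):
        # podman inspect emits a JSON array; health state lives under
        # .State.Health for containers that define a healthcheck.
        data = json.loads(subprocess.check_output(
            ["podman", "inspect", "--format", "json", name]))
        print(name, data[0]["State"]["Health"]["Status"])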
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.155 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.155 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
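With one worker thread and many pollsters, submissions queue on the executor and run serially, which is exactly what the warning above describes. The stdlib behaviour in miniature (pollster names taken from the polls that follow):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["network.outgoing.packets.error",
                 "network.outgoing.packets",
                 "network.incoming.bytes.delta",
                 "network.outgoing.bytes"]

    # One worker, four tasks: all four are enqueued immediately, then
    # execute one at a time.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(lambda p: f"polled {p}", pollsters):
            print(result)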
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.155 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.162 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f525f620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
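Each Extension object repr'd in the registration lines above is loaded by stevedore from a setuptools entry point. A minimal sketch of building such a manager; the namespace here is an assumption for illustration, not read from the log:

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # assumed entry-point group
        invoke_on_load=True,                  # instantiate each pollster
    )

    for ext in mgr:
        # ext is the stevedore.extension.Extension seen in the log lines.
        print(ext.name, ext.obj)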
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.165 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'name': 'te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.170 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.176 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.180 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.181 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.181 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.182 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.182 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:47:15.182434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.187 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.191 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.195 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.197 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.198 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:47:15.199339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.200 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.200 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.200 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.201 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.201 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.201 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.201 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.202 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.202 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:47:15.201496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.202 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.203 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
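The .delta samples above are per-interval differences of cumulative interface counters; only instance 1b6b1397-... received traffic this cycle (630 bytes). The conversion in outline, with illustrative counter readings and an assumed reset rule:

    def to_delta(prev, curr):
        """Per-interval delta from two cumulative counter readings.

        A negative difference suggests the counter reset (e.g. the
        instance rebooted); treat the current reading as the delta.
        """
        d = curr - prev
        return d if d >= 0 else curr

    print(to_delta(4120, 4750))  # illustrative readings -> 630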
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.203 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.203 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.203 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.203 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.204 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.204 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:47:15.203919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.204 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.204 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.205 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.205 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.205 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.205 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.205 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.206 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.206 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:47:15.206121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.225 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.244 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.261 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.279 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.281 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.281 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.281 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.281 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.281 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.282 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.282 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.282 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.284 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.284 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.284 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.284 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:47:15.281521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:47:15.284399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.297 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.299 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.312 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.312 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.324 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.324 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.334 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.334 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.335 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.335 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.335 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.335 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.335 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.336 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:47:15.335970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.365 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 30808576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.366 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.402 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.403 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.438 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.439 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.468 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.468 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.468 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.468 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.469 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.469 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.469 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.469 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.469 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/cpu volume: 332220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.470 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 39370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:47:15.469568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.470 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 39160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.470 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 335080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.471 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.472 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 903986787 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.472 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 180241195 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.472 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:47:15.471843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.472 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.472 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.473 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.473 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 916964403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.473 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 88997503 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:47:15.474482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.474 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.475 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.476 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.476 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:47:15.476049) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.477 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.477 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.477 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.478 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.478 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.478 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.479 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.479 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.479 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.479 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:47:15.479347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.480 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.480 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.481 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.481 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.481 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.482 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.482 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.483 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.483 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.484 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.484 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.484 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.484 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.485 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:47:15.484873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.485 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.486 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.486 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.487 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.487 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.487 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.487 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.488 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.488 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.488 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.489 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.489 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.489 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.490 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:47:15.488237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.490 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.491 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.491 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.493 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.493 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.494 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.494 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.494 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.494 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:47:15.494785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.495 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.495 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.496 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.496 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.496 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.496 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.497 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.497 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.497 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.498 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.498 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.498 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.498 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.499 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:47:15.498590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.499 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.499 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.499 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.500 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.500 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.500 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.500 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.501 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.501 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.501 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.501 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.502 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 3745613470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:47:15.501711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.502 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.502 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.502 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.502 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.503 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.503 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 3725974324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.503 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
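The disk.device.write.latency run above shows the full shape of one pollster cycle: discovery via local_instances, a coordination check (no hashring is configured, so the agent polls everything locally), a heartbeat, one _stats_to_sample line per instance/device, and a closing INFO line. A minimal sketch of that control flow follows; every name in it is an illustrative stand-in, not the real ceilometer API.

    from dataclasses import dataclass

    @dataclass
    class Sample:
        resource_id: str
        meter: str
        volume: float

    def run_pollster_cycle(meter, instance_ids, get_volumes, heartbeat,
                           coordination_group=None):
        # "Checking if we need coordination": with no group configured,
        # no hashring is consulted and this agent polls every instance.
        assert coordination_group is None
        heartbeat(meter)                          # "Pollster heartbeat update: <meter>"
        samples = []
        for uuid in instance_ids:
            for volume in get_volumes(uuid):      # "<uuid>/<meter> volume: <n>"
                samples.append(Sample(uuid, meter, volume))
        return samples                            # then "Finished polling pollster <meter>"

Called as run_pollster_cycle('disk.device.write.latency', ...), this would yield the same uuid/meter/volume triples the DEBUG lines print.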
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.503 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.504 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.504 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:47:15.504314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.504 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.505 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.505 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.505 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.505 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.505 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 349 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:47:15.506255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.506 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.507 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.507 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.507 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.507 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.507 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.508 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.508 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
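disk.device.write.latency and disk.device.write.requests are cumulative per-device totals, so dividing one by the other gives the mean time per write. Assuming the latency meter is in nanoseconds and that the nonzero entries for instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa refer to the same device, the figures above work out to roughly 10.7 ms per write:

    # Mean write latency per request from the two cumulative meters above;
    # the nanosecond unit is an assumption about the meter, not read from the log.
    latency_ns = 3_745_613_470   # disk.device.write.latency, c7d04a5a..., first device
    requests = 349               # disk.device.write.requests, same instance/device
    print(f"{latency_ns / requests / 1e6:.1f} ms/write")  # -> 10.7 ms/write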
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.508 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.508 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.508 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.508 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.509 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:47:15.509012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:47:15.510535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.511 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.511 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.511 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.511 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.511 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.512 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.512 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/memory.usage volume: 46.48828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:47:15.511893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.512 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.512 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: 42.234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
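The memory.usage volumes look fractional but each is an exact multiple of 1/1024 MB, consistent with a KiB figure (the unit libvirt reports memory stats in) divided by 1024; this is an inference from the numbers, not a statement about the pollster's internals:

    # Each value above is exactly KiB / 1024 (e.g. 47604 KiB for c7d04a5a...).
    for kib in (47604, 43744, 43836, 43248):
        print(kib, "KiB ->", kib / 1024, "MB")
    # 47604 KiB -> 46.48828125 MB, matching the first sample above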
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.513 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.514 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:47:15.513817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.514 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.514 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.515 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.515 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
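network.incoming.bytes is a cumulative counter per instance interface; the .rate and .delta meters finished later in this task are derived from successive cumulative samples. A sketch of that transformation (illustrative only, not ceilometer code):

    # Rate from two successive cumulative samples taken dt seconds apart.
    def rate(prev_volume, prev_ts, cur_volume, cur_ts):
        dt = cur_ts - prev_ts
        return (cur_volume - prev_volume) / dt if dt > 0 else 0.0

    print(rate(18000, 0.0, 20170, 300.0))  # ~7.2 B/s over a 5-minute interval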
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.515 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.515 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.515 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:47:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:47:15.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
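The block of "Finished processing pollster" lines closes the whole polling task: every meter started in this interval should reappear here. One hypothetical way to check that against a saved copy of this journal (an analysis snippet, not part of any agent):

    import re

    started, finished = set(), set()
    with open("compute-0.log") as f:  # assumed path to a dump of this journal
        for line in f:
            if m := re.search(r"Polling pollster (\S+) in the context", line):
                started.add(m.group(1))
            if m := re.search(r"Finished processing pollster \[([^\]]+)\]", line):
                finished.add(m.group(1))
    print("started but not finished:", started - finished)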
Feb 19 20:47:15 compute-0 nova_compute[188777]: 2026-02-19 20:47:15.987 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:19 compute-0 nova_compute[188777]: 2026-02-19 20:47:19.112 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
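The recurring "[POLLIN] on fd 26 __log_wakeup" lines are ovs.poller (used by ovsdbapp's OVSDB IDL inside nova-compute) logging that the database connection's fd turned readable. The primitive underneath is select.poll; a self-contained sketch of the same wakeup pattern, with a pipe standing in for the OVSDB socket:

    import os, select

    r, w = os.pipe()
    p = select.poll()
    p.register(r, select.POLLIN)
    os.write(w, b"x")                      # data arrives on the fd
    for fd, event in p.poll():
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # what ovs.poller logs at DEBUG
            os.read(fd, 1)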
Feb 19 20:47:19 compute-0 podman[258037]: 2026-02-19 20:47:19.392654378 +0000 UTC m=+0.068589283 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Feb 19 20:47:19 compute-0 podman[258036]: 2026-02-19 20:47:19.400981589 +0000 UTC m=+0.073503987 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, config_id=kepler, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
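Each health_status line is podman running the container's configured healthcheck (the 'test' entry in config_data, with the healthchecks directory bind-mounted at /openstack). The same check can be triggered by hand with podman's real "healthcheck run" subcommand; a small wrapper, with the container name taken from the log:

    import subprocess

    # Exit code 0 means the check passed, matching health_status=healthy above.
    r = subprocess.run(["podman", "healthcheck", "run", "ceilometer_agent_ipmi"])
    print("healthy" if r.returncode == 0 else "unhealthy")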
Feb 19 20:47:20 compute-0 nova_compute[188777]: 2026-02-19 20:47:20.989 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:23 compute-0 podman[258075]: 2026-02-19 20:47:23.368256803 +0000 UTC m=+0.053209074 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:47:24 compute-0 nova_compute[188777]: 2026-02-19 20:47:24.118 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:25 compute-0 podman[258098]: 2026-02-19 20:47:25.41690617 +0000 UTC m=+0.078117191 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 19 20:47:25 compute-0 nova_compute[188777]: 2026-02-19 20:47:25.994 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:27 compute-0 podman[258118]: 2026-02-19 20:47:27.426483978 +0000 UTC m=+0.106203608 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2)
Feb 19 20:47:29 compute-0 nova_compute[188777]: 2026-02-19 20:47:29.123 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:29 compute-0 podman[204724]: time="2026-02-19T20:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:47:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:47:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5319 "" "Go-http-client/1.1"
Feb 19 20:47:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:47:30.467 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:47:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:47:30.468 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:47:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:47:30.471 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
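The acquire/held/released trio around _check_child_processes is oslo.concurrency's lockutils instrumentation: neutron's ProcessMonitor serializes its child-process check under a named lock. Roughly, in decorator form (lockutils.synchronized is the real oslo.concurrency API; the function body here is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        ...  # inspect monitored external processes, respawn any that died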
Feb 19 20:47:30 compute-0 nova_compute[188777]: 2026-02-19 20:47:30.997 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:31 compute-0 openstack_network_exporter[207898]: ERROR   20:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:47:31 compute-0 openstack_network_exporter[207898]: ERROR   20:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
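Both appctl errors come from the exporter calling ovs-vswitchd commands that only exist for the userspace (netdev) datapath; on a node running only the kernel datapath they fail with "please specify an existing datapath" and can be read as noise rather than a fault (that reading is an inference from the message, not stated in the log). The same probe by hand:

    import subprocess

    # Real ovs-appctl commands; they require a netdev datapath to exist.
    for cmd in ("dpif-netdev/pmd-rxq-show", "dpif-netdev/pmd-perf-show"):
        r = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        print(cmd, "->", (r.stdout or r.stderr).strip())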
Feb 19 20:47:34 compute-0 nova_compute[188777]: 2026-02-19 20:47:34.126 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:36 compute-0 nova_compute[188777]: 2026-02-19 20:47:36.001 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:39 compute-0 nova_compute[188777]: 2026-02-19 20:47:39.129 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:41 compute-0 nova_compute[188777]: 2026-02-19 20:47:41.004 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:43 compute-0 podman[258144]: 2026-02-19 20:47:43.400896617 +0000 UTC m=+0.084942414 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 19 20:47:43 compute-0 podman[258145]: 2026-02-19 20:47:43.437668176 +0000 UTC m=+0.107019984 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:47:44 compute-0 nova_compute[188777]: 2026-02-19 20:47:44.130 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:44 compute-0 podman[258186]: 2026-02-19 20:47:44.766535629 +0000 UTC m=+0.095976170 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:47:46 compute-0 nova_compute[188777]: 2026-02-19 20:47:46.007 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:49 compute-0 nova_compute[188777]: 2026-02-19 20:47:49.133 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:50 compute-0 podman[258208]: 2026-02-19 20:47:50.379149221 +0000 UTC m=+0.063124632 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 19 20:47:50 compute-0 podman[258207]: 2026-02-19 20:47:50.410283504 +0000 UTC m=+0.095008999 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Feb 19 20:47:51 compute-0 nova_compute[188777]: 2026-02-19 20:47:51.013 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:54 compute-0 nova_compute[188777]: 2026-02-19 20:47:54.136 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:54 compute-0 podman[258247]: 2026-02-19 20:47:54.285824322 +0000 UTC m=+0.106362824 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:47:56 compute-0 nova_compute[188777]: 2026-02-19 20:47:56.017 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:56 compute-0 podman[258273]: 2026-02-19 20:47:56.42026107 +0000 UTC m=+0.103349070 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 19 20:47:58 compute-0 podman[258293]: 2026-02-19 20:47:58.440005936 +0000 UTC m=+0.127609998 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Feb 19 20:47:59 compute-0 nova_compute[188777]: 2026-02-19 20:47:59.137 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:47:59 compute-0 podman[204724]: time="2026-02-19T20:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:47:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:47:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5312 "" "Go-http-client/1.1"
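The two podman[204724] access-log lines are prometheus-podman-exporter polling the libpod REST API over /run/podman/podman.sock (the socket appears in the exporter's config_data above as CONTAINER_HOST). A hedged sketch that replays the same two GETs, assuming curl is installed and the caller can read the socket:

    # Illustrative replay of the two libpod API calls in the access log.
    # "http://d" is a dummy host; with --unix-socket, curl ignores it.
    import json
    import subprocess

    SOCK = "/run/podman/podman.sock"

    def libpod_get(path: str):
        out = subprocess.run(
            ["curl", "-s", "--unix-socket", SOCK, "http://d" + path],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    containers = libpod_get("/v4.9.3/libpod/containers/json?all=true")
    stats = libpod_get("/v4.9.3/libpod/containers/stats?all=false&stream=false")
    print(len(containers), "containers")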
Feb 19 20:48:01 compute-0 nova_compute[188777]: 2026-02-19 20:48:01.020 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:01 compute-0 openstack_network_exporter[207898]: ERROR   20:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:48:01 compute-0 openstack_network_exporter[207898]: ERROR   20:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
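The appctl.go errors appear twice in this window, 30 seconds apart: openstack_network_exporter calls the OVS commands dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show, which only exist for the userspace (netdev/DPDK) datapath, and this host uses the kernel datapath (the instance network_info later in this log shows datapath_type "system"), so ovs-vswitchd answers "please specify an existing datapath". On a non-DPDK compute node these errors are expected noise. A sketch to confirm from the host, assuming ovs-appctl is installed:

    # Sketch: list the OVS datapaths; the pmd-* commands need a "netdev" one.
    import subprocess

    r = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                       capture_output=True, text=True)
    print(r.stdout or r.stderr)
    # Typical kernel-datapath output is "system@ovs-system"; with no
    # "netdev@..." entry, dpif-netdev/pmd-rxq-show fails exactly as above.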
Feb 19 20:48:02 compute-0 nova_compute[188777]: 2026-02-19 20:48:02.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:02 compute-0 nova_compute[188777]: 2026-02-19 20:48:02.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:02 compute-0 nova_compute[188777]: 2026-02-19 20:48:02.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
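Several of nova-compute's periodic tasks short-circuit on configuration, as _reclaim_queued_deletes does here: with reclaim_instance_interval at its default of 0, soft-deleted instances are never reclaimed by this task and the handler returns immediately. The guard is just an early return; a minimal sketch of the pattern, with illustrative names rather than nova's actual code:

    # Illustrative guard pattern, not nova's literal implementation.
    class Conf:
        reclaim_instance_interval = 0  # default: never reclaim

    CONF = Conf()

    def reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise purge instances soft-deleted longer than the interval...

    reclaim_queued_deletes()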
Feb 19 20:48:04 compute-0 nova_compute[188777]: 2026-02-19 20:48:04.140 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:04 compute-0 nova_compute[188777]: 2026-02-19 20:48:04.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:04 compute-0 nova_compute[188777]: 2026-02-19 20:48:04.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:48:04 compute-0 nova_compute[188777]: 2026-02-19 20:48:04.940 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:48:04 compute-0 nova_compute[188777]: 2026-02-19 20:48:04.941 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:48:04 compute-0 nova_compute[188777]: 2026-02-19 20:48:04.941 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:48:06 compute-0 nova_compute[188777]: 2026-02-19 20:48:06.025 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:06 compute-0 nova_compute[188777]: 2026-02-19 20:48:06.354 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updating instance_info_cache with network_info: [{"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:48:06 compute-0 nova_compute[188777]: 2026-02-19 20:48:06.369 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:48:06 compute-0 nova_compute[188777]: 2026-02-19 20:48:06.369 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:48:06 compute-0 nova_compute[188777]: 2026-02-19 20:48:06.370 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
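The _heal_instance_info_cache pass above refreshed the instance's cached network_info, logged as a JSON list of VIF dicts (port ID, MAC address, bridge, subnets with fixed IPs, MTU). Since it is plain JSON, the useful fields are easy to pull out of a captured log line; a short sketch, assuming the blob has been saved to network_info.json:

    # Sketch: summarize a nova network_info blob like the one logged above.
    import json

    with open("network_info.json") as f:   # assumed capture of the JSON list
        network_info = json.load(f)

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], vif["network"]["bridge"], ips)
    # -> 6730c115-... fa:16:3e:b9:4e:00 br-int ['10.100.3.124']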
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.142 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.287 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
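The Acquiring/acquired/released triple around "compute_resources" is oslo.concurrency's standard lock tracing: nova's resource tracker wraps these methods with lockutils.synchronized, and at DEBUG level each acquire and release is logged together with the wait and hold times. A minimal sketch using the same decorator (real oslo.concurrency API, trivial body):

    # Minimal use of oslo.concurrency's synchronized decorator; with DEBUG
    # logging enabled it emits Acquiring/acquired/released lines like above.
    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section guarded by the in-process lock

    clean_compute_node_cache()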
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.290 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.375 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.459 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.461 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.514 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.524 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.578 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.580 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.632 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.640 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.690 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.690 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.762 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.770 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.825 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.826 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:48:09 compute-0 nova_compute[188777]: 2026-02-19 20:48:09.897 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
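Each disk probe above runs qemu-img info under oslo.concurrency's prlimit wrapper, which caps the child at 1 GiB of address space (--as=1073741824) and 30 CPU-seconds (--cpu=30) so a pathological image cannot wedge the compute service; --force-share avoids taking the image lock while the guest has the disk open. The command can be replayed verbatim; a sketch using a path taken from the log:

    # Reproduce one of the disk probes above (path/UUID taken from the log).
    import json
    import subprocess

    disk = "/var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",            # resource caps
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(info["format"], info["virtual-size"])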
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.325 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.326 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=72.05469131469727GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.327 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.327 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.432 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.433 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.433 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.434 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.434 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.435 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.555 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.573 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.575 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:48:10 compute-0 nova_compute[188777]: 2026-02-19 20:48:10.576 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
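The inventory dict reported to placement ties this cycle's numbers together: usable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 schedulable VCPUs against 4 allocated, 7167 MB of RAM, and 70 GB of disk. A worked check:

    # Worked check of placement capacity math for the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, "schedulable:", capacity)
    # VCPU schedulable: 32
    # MEMORY_MB schedulable: 7167
    # DISK_GB schedulable: 70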
Feb 19 20:48:11 compute-0 nova_compute[188777]: 2026-02-19 20:48:11.030 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:11 compute-0 nova_compute[188777]: 2026-02-19 20:48:11.573 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:11 compute-0 nova_compute[188777]: 2026-02-19 20:48:11.574 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:12 compute-0 nova_compute[188777]: 2026-02-19 20:48:12.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:14 compute-0 nova_compute[188777]: 2026-02-19 20:48:14.144 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:14 compute-0 nova_compute[188777]: 2026-02-19 20:48:14.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:48:14 compute-0 podman[258343]: 2026-02-19 20:48:14.403342388 +0000 UTC m=+0.075225821 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., distribution-scope=public, config_id=openstack_network_exporter, vcs-type=git, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 19 20:48:14 compute-0 podman[258344]: 2026-02-19 20:48:14.417961524 +0000 UTC m=+0.091243481 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:48:15 compute-0 podman[258386]: 2026-02-19 20:48:15.418959815 +0000 UTC m=+0.096942549 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:48:16 compute-0 nova_compute[188777]: 2026-02-19 20:48:16.035 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:19 compute-0 nova_compute[188777]: 2026-02-19 20:48:19.145 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:21 compute-0 nova_compute[188777]: 2026-02-19 20:48:21.039 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:21 compute-0 podman[258406]: 2026-02-19 20:48:21.423876532 +0000 UTC m=+0.093626286 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public)
Feb 19 20:48:21 compute-0 podman[258407]: 2026-02-19 20:48:21.436235369 +0000 UTC m=+0.106978563 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 19 20:48:24 compute-0 nova_compute[188777]: 2026-02-19 20:48:24.153 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:25 compute-0 podman[258444]: 2026-02-19 20:48:25.416515188 +0000 UTC m=+0.101459140 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:48:26 compute-0 nova_compute[188777]: 2026-02-19 20:48:26.044 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:27 compute-0 podman[258467]: 2026-02-19 20:48:27.425292811 +0000 UTC m=+0.104380001 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:48:29 compute-0 nova_compute[188777]: 2026-02-19 20:48:29.156 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:29 compute-0 podman[258489]: 2026-02-19 20:48:29.441938541 +0000 UTC m=+0.121996203 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 19 20:48:29 compute-0 podman[204724]: time="2026-02-19T20:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:48:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:48:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5312 "" "Go-http-client/1.1"
Feb 19 20:48:30 compute-0 sshd-session[258487]: Invalid user amalia from 103.179.56.24 port 60242
Feb 19 20:48:30 compute-0 sshd-session[258487]: Received disconnect from 103.179.56.24 port 60242:11: Bye Bye [preauth]
Feb 19 20:48:30 compute-0 sshd-session[258487]: Disconnected from invalid user amalia 103.179.56.24 port 60242 [preauth]
Feb 19 20:48:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:48:30.469 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:48:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:48:30.470 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:48:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:48:30.471 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:48:31 compute-0 nova_compute[188777]: 2026-02-19 20:48:31.049 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:31 compute-0 openstack_network_exporter[207898]: ERROR   20:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:48:31 compute-0 openstack_network_exporter[207898]: ERROR   20:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:48:34 compute-0 nova_compute[188777]: 2026-02-19 20:48:34.159 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:36 compute-0 nova_compute[188777]: 2026-02-19 20:48:36.053 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:39 compute-0 nova_compute[188777]: 2026-02-19 20:48:39.161 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:41 compute-0 nova_compute[188777]: 2026-02-19 20:48:41.056 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:44 compute-0 nova_compute[188777]: 2026-02-19 20:48:44.162 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:44 compute-0 podman[258515]: 2026-02-19 20:48:44.74449267 +0000 UTC m=+0.074459618 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, version=9.7, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1770267347, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Feb 19 20:48:44 compute-0 podman[258516]: 2026-02-19 20:48:44.750582051 +0000 UTC m=+0.079778044 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:48:46 compute-0 nova_compute[188777]: 2026-02-19 20:48:46.060 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:46 compute-0 podman[258557]: 2026-02-19 20:48:46.424403019 +0000 UTC m=+0.107137918 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:48:49 compute-0 nova_compute[188777]: 2026-02-19 20:48:49.164 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:51 compute-0 nova_compute[188777]: 2026-02-19 20:48:51.065 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:51 compute-0 sshd-session[258575]: Received disconnect from 103.119.94.10 port 52468:11: Bye Bye [preauth]
Feb 19 20:48:51 compute-0 sshd-session[258575]: Disconnected from authenticating user root 103.119.94.10 port 52468 [preauth]
Feb 19 20:48:52 compute-0 podman[258578]: 2026-02-19 20:48:52.386098227 +0000 UTC m=+0.064801586 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:48:52 compute-0 podman[258577]: 2026-02-19 20:48:52.408543958 +0000 UTC m=+0.092746509 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, release=1214.1726694543, version=9.4)
Feb 19 20:48:54 compute-0 nova_compute[188777]: 2026-02-19 20:48:54.166 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:56 compute-0 nova_compute[188777]: 2026-02-19 20:48:56.070 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:56 compute-0 podman[258615]: 2026-02-19 20:48:56.369506374 +0000 UTC m=+0.055801794 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:48:58 compute-0 podman[258640]: 2026-02-19 20:48:58.409782311 +0000 UTC m=+0.099094247 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0)
Feb 19 20:48:59 compute-0 nova_compute[188777]: 2026-02-19 20:48:59.169 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:48:59 compute-0 podman[204724]: time="2026-02-19T20:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:48:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:48:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5318 "" "Go-http-client/1.1"
Feb 19 20:49:00 compute-0 podman[258660]: 2026-02-19 20:49:00.416295033 +0000 UTC m=+0.090023123 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 19 20:49:01 compute-0 nova_compute[188777]: 2026-02-19 20:49:01.072 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:01 compute-0 openstack_network_exporter[207898]: ERROR   20:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:49:01 compute-0 openstack_network_exporter[207898]: ERROR   20:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
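The two ERROR lines above come from the exporter polling dpif-netdev/* appctl commands, which only exist for the userspace (netdev) datapath with PMD threads; the instance network_info logged a few seconds later (20:49:06) shows "datapath_type": "system", i.e. this host runs the kernel datapath, so there is no PMD data to query. A quick confirmation sketch using Python's subprocess around the stock ovs-appctl tool (assumed to be on PATH):

import subprocess

def appctl(*args):
    """Run ovs-appctl and return stdout; failures surface on stderr."""
    return subprocess.run(
        ["ovs-appctl", *args], capture_output=True, text=True
    ).stdout

# dpif/show lists the datapaths OVS actually has. On a kernel-datapath host
# it reports system@ovs-system, and the dpif-netdev/* PMD commands fail with
# "please specify an existing datapath", exactly as in the errors above.
print(appctl("dpif/show"))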
Feb 19 20:49:02 compute-0 nova_compute[188777]: 2026-02-19 20:49:02.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:04 compute-0 nova_compute[188777]: 2026-02-19 20:49:04.172 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:04 compute-0 nova_compute[188777]: 2026-02-19 20:49:04.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:04 compute-0 nova_compute[188777]: 2026-02-19 20:49:04.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:49:04 compute-0 nova_compute[188777]: 2026-02-19 20:49:04.266 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:49:05 compute-0 nova_compute[188777]: 2026-02-19 20:49:05.001 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:49:05 compute-0 nova_compute[188777]: 2026-02-19 20:49:05.002 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:49:05 compute-0 nova_compute[188777]: 2026-02-19 20:49:05.002 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:49:05 compute-0 nova_compute[188777]: 2026-02-19 20:49:05.002 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.075 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.506 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.521 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.522 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.523 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.524 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:06 compute-0 nova_compute[188777]: 2026-02-19 20:49:06.524 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.174 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.287 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.307 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.307 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.308 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.308 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.386 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.455 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.456 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.523 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.529 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.575 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.576 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.626 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.632 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.692 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.693 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.777 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.783 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.840 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.841 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:49:09 compute-0 nova_compute[188777]: 2026-02-19 20:49:09.889 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
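The eight qemu-img runs above are the resource tracker sizing each of the four instance disks (each disk is probed twice in this audit), wrapped in oslo.concurrency's prlimit so a pathological image cannot exhaust memory or CPU during the probe, and with --force-share so the probe does not take the image lock held by the running guest. A re-runnable sketch of the same invocation, using only the flags visible in the log:

import json
import subprocess

# One of the instance disks from the audit above.
disk = "/var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk"

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # cap address space at 1 GiB
    "--cpu=30",          # cap CPU time at 30 seconds
    "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
]
info = json.loads(subprocess.check_output(cmd))
print(info["format"], info["virtual-size"], info.get("actual-size"))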
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.240 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.241 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4598MB free_disk=72.05465316772461GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.241 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.241 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.330 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.331 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.331 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.331 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.332 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.332 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.346 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing inventories for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.364 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating ProviderTree inventory for provider c266959e-952e-41ad-bc2e-56513f39ec2d from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.365 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Updating inventory in ProviderTree for provider c266959e-952e-41ad-bc2e-56513f39ec2d with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.383 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing aggregate associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.401 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Refreshing trait associations for resource provider c266959e-952e-41ad-bc2e-56513f39ec2d, traits: HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.537 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.552 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
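Placement turns the inventory above into schedulable capacity as (total - reserved) * allocation_ratio. Worked against this log's numbers:

# capacity = (total - reserved) * allocation_ratio, per resource class,
# using the inventory reported for provider c266959e-952e-41ad-bc2e-56513f39ec2d.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    print(rc, capacity)
# -> VCPU 32, MEMORY_MB 7167, DISK_GB 70: the four running m1.nano guests
#    (1 VCPU / 128 MB / 1 GB each, per the allocations logged above) fit easily.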
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.555 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:49:10 compute-0 nova_compute[188777]: 2026-02-19 20:49:10.555 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:49:11 compute-0 nova_compute[188777]: 2026-02-19 20:49:11.082 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:12 compute-0 nova_compute[188777]: 2026-02-19 20:49:12.551 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:13 compute-0 nova_compute[188777]: 2026-02-19 20:49:13.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:14 compute-0 nova_compute[188777]: 2026-02-19 20:49:14.176 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:14 compute-0 nova_compute[188777]: 2026-02-19 20:49:14.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:14 compute-0 sshd-session[258711]: Invalid user ubuntu from 103.250.11.249 port 58912
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.155 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.156 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.156 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.158 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.160 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'name': 'te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.162 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.162 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.164 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.164 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.165 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.165 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.166 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.166 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.167 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.167 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.168 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.168 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.169 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.169 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'name': 'te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e98a7b34-d7ef-4dcd-b1f3-0a369d480f18'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'user_id': '4495bf20aedd42ff97fdae62ef729522', 'hostId': '22c8c0ddb7108a2907037af7b4f06c9d19e2238520664206bd96d609', 'status': 'active', 'metadata': {'metering.server_group': '08c5967c-a408-49e3-be73-425b7dd8ee8c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
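The three "instance data" lines in this cycle are the Python repr of each local instance that the libvirt discovery step logs at discovery.py:315. A minimal sketch for recovering those dicts from a journal line; the regex and helper name are mine, not ceilometer's:

```python
# Hypothetical helper: pull the instance-data dict back out of a
# "DEBUG ceilometer.compute.discovery [-] instance data: {...}" line.
import ast
import re

DISCOVERY_RE = re.compile(r"instance data: (\{.*\}) discover_libvirt_polling")

def parse_discovery_line(line: str) -> dict | None:
    """Return the logged instance-data dict, or None for non-discovery lines."""
    m = DISCOVERY_RE.search(line)
    if m is None:
        return None
    # The agent logs a Python repr (single quotes), not JSON, so use literal_eval.
    return ast.literal_eval(m.group(1))
```

Applied to the line above this yields, for example, info['flavor']['vcpus'] == 1, and exposes the metering.server_group key under info['metadata'] that the autoscaling stack plausibly set on the "te-...-asg-..." instance.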
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.170 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.170 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.170 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.170 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.170 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.error': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:49:15.170847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
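Note the thread split visible here: thread 15 executes the pollster and records its "heartbeat update", while thread 12 separately logs "Updated heartbeat for ... (timestamp)". A toy sketch of that kind of shared heartbeat table, purely illustrative and not ceilometer's actual implementation:

```python
# Toy heartbeat table shared between a polling thread and a status thread.
import datetime
import threading

_heartbeats: dict[str, datetime.datetime] = {}
_lock = threading.Lock()

def heartbeat(pollster: str) -> None:
    """Called by the polling thread each time a pollster runs."""
    with _lock:
        _heartbeats[pollster] = datetime.datetime.now(datetime.timezone.utc)

def last_seen(pollster: str) -> datetime.datetime | None:
    """Read side, e.g. for an 'Updated heartbeat for ...' status message."""
    with _lock:
        return _heartbeats.get(pollster)
```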
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.172 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.error': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.172 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.error': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.172 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.error': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-684728485>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-215985627>, <NovaLikeServer: te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.174 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.177 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.180 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.183 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
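Every sample in this cycle is logged in the same shape, "<instance-uuid>/<meter> volume: <value>". A small sketch (names and regex mine) that tallies those lines per meter, assuming integer volumes as in this excerpt:

```python
import re
from collections import defaultdict

SAMPLE_RE = re.compile(
    r"(?P<resource>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
)

def tally(lines):
    """Map meter -> {instance uuid -> last logged volume}."""
    meters: defaultdict[str, dict[str, int]] = defaultdict(dict)
    for line in lines:
        m = SAMPLE_RE.search(line)
        if m:
            meters[m.group("meter")][m.group("resource")] = int(m.group("volume"))
    return meters
```

Run over the four lines above, this gives tally(...)['network.outgoing.packets.error'] with all four instances at 0. One caveat: the per-device disk meters later in the cycle log the same uuid once per block device, so only the last device's value would survive in this toy map.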
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.184 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.185 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.185 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.185 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:49:15.184650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.186 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:49:15.186762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.187 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.187 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.187 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
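The *.delta meters report the growth of a cumulative counter between consecutive polls, which is consistent with the zeros above (no new incoming traffic this interval, or a fresh baseline). A sketch of the general idea only, not ceilometer's internal cache layout:

```python
# Derive a *.delta meter from consecutive cumulative readings.
previous: dict[tuple[str, str], int] = {}  # (instance, meter) -> last cumulative value

def to_delta(instance: str, meter: str, cumulative: int) -> int:
    """Growth since the previous poll; 0 on the first poll, and a counter
    reset (e.g. instance reboot) is also treated as 0."""
    key = (instance, meter)
    last = previous.get(key)
    previous[key] = cumulative
    if last is None or cumulative < last:
        return 0
    return cumulative - last

print(to_delta("dff9d513", "network.incoming.bytes", 15886))  # first poll -> 0
print(to_delta("dff9d513", "network.incoming.bytes", 16890))  # -> 1004
```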
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:49:15.188487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.188 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.189 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.189 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.189 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.189 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.189 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.190 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.190 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:49:15.190066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.207 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.225 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.241 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.256 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.256 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
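All four instances report power.state volume 1, which matches libvirt's virDomainState value for a running domain. A lookup table for reading these samples (values per libvirt's enum; the table itself is mine):

```python
# libvirt virDomainState values, as documented upstream.
LIBVIRT_POWER_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",     # domain is being shut down
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}

assert LIBVIRT_POWER_STATE[1] == "running"
```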
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.256 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:49:15.257359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.257 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.258 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:49:15.258923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 nova_compute[188777]: 2026-02-19 20:49:15.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.272 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.272 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.286 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.287 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.302 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.303 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.321 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.322 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.322 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
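Each instance contributes one disk.device.capacity sample per block device: the 1073741824-byte value is exactly the m1.nano flavor's 1 GiB root disk, and the ~500 KB second device is plausibly the config drive. Quick arithmetic check:

```python
root_bytes = 1_073_741_824
assert root_bytes == 1 * 1024**3        # matches the flavor's "disk: 1" (GiB)
print(509_952 / 1024, 485_376 / 1024)   # -> 498.0 474.0 KiB for the small devices
```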
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.323 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.323 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.323 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.323 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.323 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:49:15.323706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.358 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 30808576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.359 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.393 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.393 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 podman[258714]: 2026-02-19 20:49:15.403890324 +0000 UTC m=+0.086672069 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1770267347, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z)
Feb 19 20:49:15 compute-0 podman[258715]: 2026-02-19 20:49:15.413386201 +0000 UTC m=+0.092538952 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
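The two podman lines are container health_status events, with the outcome embedded as "health_status=healthy" (and health_failing_streak=0) in the metadata dump. A minimal sketch (regex mine) for extracting the container name and health from such a journal line:

```python
import re

HEALTH_RE = re.compile(
    r"container health_status \w+ \(.*?name=(?P<name>[\w-]+),"
    r".*?health_status=(?P<health>\w+)"
)

def parse_health(line: str) -> tuple[str, str] | None:
    """Return (container name, health status) from a podman health event line."""
    m = HEALTH_RE.search(line)
    return (m.group("name"), m.group("health")) if m else None
```

Both lines above parse to ('openstack_network_exporter', 'healthy') and ('node_exporter', 'healthy').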
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.429 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.429 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 sshd-session[258711]: Received disconnect from 103.250.11.249 port 58912:11: Bye Bye [preauth]
Feb 19 20:49:15 compute-0 sshd-session[258711]: Disconnected from invalid user ubuntu 103.250.11.249 port 58912 [preauth]
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.464 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.465 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.465 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.466 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.466 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.466 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.466 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.466 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.466 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/cpu volume: 333570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:49:15.466617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.467 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 40720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.467 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 40540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.467 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/cpu volume: 336440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.468 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
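The cpu meter is cumulative guest CPU time in nanoseconds (about 333.6 s for instance c7d04a5a versus 40.7 s for the younger tempest servers), so a utilisation figure has to be derived from two consecutive polls. Sketch of the usual formula; the interval and second reading below are illustrative, not taken from this log:

```python
def cpu_util_percent(prev_ns: int, curr_ns: int, interval_s: float, vcpus: int) -> float:
    """Average CPU utilisation over the interval, in percent of allocated vCPUs."""
    return 100.0 * (curr_ns - prev_ns) / (interval_s * vcpus * 1e9)

# m1.nano has 1 vCPU; e.g. 3 s of CPU time burned over a 300 s interval:
print(cpu_util_percent(333_570_000_000, 336_570_000_000, 300, 1))  # -> 1.0
```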
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.468 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.468 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.469 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.469 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.469 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.469 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.469 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:49:15.469693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.470 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 903986787 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.470 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.latency volume: 180241195 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.470 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.471 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.471 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.471 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.472 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 916964403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.472 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.latency volume: 88997503 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.473 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
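disk.device.read.latency is likewise a cumulative nanosecond total per device; a mean per-request latency comes from dividing its per-interval growth by the per-interval growth of disk.device.read.requests (polled at the end of this excerpt). Illustrative numbers only:

```python
def mean_read_latency_ms(lat_delta_ns: int, req_delta: int) -> float:
    """Average latency per read over one polling interval, in milliseconds."""
    return (lat_delta_ns / req_delta) / 1e6 if req_delta else 0.0

print(mean_read_latency_ms(10_000_000, 20))  # 20 reads costing 10 ms total -> 0.5 ms each
```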
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.474 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.474 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.474 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.474 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.474 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:49:15.474577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.475 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.475 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.475 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.476 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.476 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.477 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.477 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.477 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.477 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.477 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.477 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.478 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.478 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.478 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.479 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.479 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:49:15.477786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.479 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.479 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.480 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.480 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.480 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.480 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.481 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.481 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.481 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.481 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.482 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.482 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
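[Editor's note] Unlike the network meters, each disk.device.* pollster above emits two samples per instance, consistent with two attached block devices per guest. A hypothetical parsing helper (the regex and names below are illustrative, not part of ceilometer) that pulls the instance UUID, meter, and volume out of a _stats_to_sample line:

    # Hypothetical helper: parse a _stats_to_sample debug line.
    import re

    SAMPLE_RE = re.compile(
        r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
        r"/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
    )

    line = ("c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/"
            "disk.device.read.requests volume: 1111")
    m = SAMPLE_RE.search(line)
    if m:
        print(m.group("uuid"), m.group("meter"), float(m.group("volume")))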
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:49:15.480273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.483 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.484 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:49:15.483899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.484 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.484 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.485 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.485 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.485 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.485 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.485 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.485 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.486 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.486 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:49:15.486034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.486 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.486 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.487 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.487 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.487 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.488 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.488 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.488 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:49:15.489534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.489 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.490 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.490 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.490 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.490 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.491 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.491 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.491 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.492 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.493 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:49:15.492823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.493 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.493 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.494 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.494 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.494 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.494 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.495 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.495 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.495 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.495 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.496 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.496 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.496 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 3745613470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:49:15.496093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.496 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.497 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.497 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.497 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.497 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.498 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 3725974324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.498 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
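[Editor's note] The disk.device.write.latency volumes above are large, monotonically growing values, which is consistent with cumulative nanosecond counters as reported by libvirt; a per-interval figure would come from differencing two polls. A back-of-the-envelope sketch under that assumption (the two counter values below are illustrative, not taken from two real polls of one device):

    # Assumption: write.latency is a cumulative nanosecond counter, so the
    # per-interval value is the delta between two consecutive polls.
    prev_ns, curr_ns = 3_725_974_324, 3_745_613_470
    poll_interval_s = 300
    delta_ms = (curr_ns - prev_ns) / 1e6
    print(f"{delta_ms:.1f} ms of write latency accrued over "
          f"{poll_interval_s} s")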
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.499 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.500 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:49:15.499720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.500 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.501 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.501 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.501 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.501 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.502 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 349 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.502 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.502 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.502 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.502 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.503 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.503 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.503 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.503 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.504 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.504 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.504 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.504 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:49:15.501997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:49:15.504321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.505 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/memory.usage volume: 46.48828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:49:15.505480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:49:15.506444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.506 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/memory.usage volume: 42.234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
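[Editor's note] Ceilometer reports memory.usage in MB per instance; the four guests above each sit in the low-to-mid 40s. A trivial cross-check, summing the values copied from the log lines above:

    # Per-instance memory.usage (MB) copied from the log lines above.
    usage_mb = {
        "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa": 46.48828125,
        "997ebdcf-7eab-485b-8fbf-d21112c78946": 42.71875,
        "dff9d513-54f8-4d73-acf7-df610dc4d064": 42.80859375,
        "1b6b1397-fda7-4470-883b-1cc5974fac84": 42.234375,
    }
    print(f"total guest memory in use: {sum(usage_mb.values()):.2f} MB")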
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.507 15 DEBUG ceilometer.compute.pollsters [-] c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.508 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.508 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.508 15 DEBUG ceilometer.compute.pollsters [-] 1b6b1397-fda7-4470-883b-1cc5974fac84/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:49:15.507824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.508 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:49:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:49:15.510 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
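[Editor's note] The burst of "Finished processing pollster [...]" lines above marks the end of the polling task, one line per meter in the task. When auditing whether every configured meter completed, a small tally over captured log text works; the regex and sample input below are illustrative, not part of ceilometer:

    # Illustrative tally of "Finished processing pollster [...]" lines.
    import re
    from collections import Counter

    FINISHED_RE = re.compile(r"Finished processing pollster \[([\w.]+)\]")
    captured = [
        "... Finished processing pollster [memory.usage]. ...",
        "... Finished processing pollster [network.incoming.bytes]. ...",
    ]
    counts = Counter(
        m.group(1)
        for line in captured
        if (m := FINISHED_RE.search(line))
    )
    print(counts)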
Feb 19 20:49:16 compute-0 nova_compute[188777]: 2026-02-19 20:49:16.085 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:17 compute-0 podman[258756]: 2026-02-19 20:49:17.3938969 +0000 UTC m=+0.077541774 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:49:19 compute-0 nova_compute[188777]: 2026-02-19 20:49:19.179 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:21 compute-0 nova_compute[188777]: 2026-02-19 20:49:21.090 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:23 compute-0 podman[258777]: 2026-02-19 20:49:23.378582376 +0000 UTC m=+0.054799834 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 19 20:49:23 compute-0 podman[258776]: 2026-02-19 20:49:23.382522128 +0000 UTC m=+0.063537195 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 19 20:49:24 compute-0 nova_compute[188777]: 2026-02-19 20:49:24.181 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:24 compute-0 sshd-session[258813]: Invalid user deployer from 160.187.147.124 port 36216
Feb 19 20:49:25 compute-0 sshd-session[258813]: Received disconnect from 160.187.147.124 port 36216:11: Bye Bye [preauth]
Feb 19 20:49:25 compute-0 sshd-session[258813]: Disconnected from invalid user deployer 160.187.147.124 port 36216 [preauth]
Feb 19 20:49:26 compute-0 nova_compute[188777]: 2026-02-19 20:49:26.095 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:27 compute-0 podman[258815]: 2026-02-19 20:49:27.42783014 +0000 UTC m=+0.102024508 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:49:29 compute-0 nova_compute[188777]: 2026-02-19 20:49:29.183 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:29 compute-0 podman[258839]: 2026-02-19 20:49:29.398315937 +0000 UTC m=+0.086571195 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 19 20:49:29 compute-0 podman[204724]: time="2026-02-19T20:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:49:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:49:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5320 "" "Go-http-client/1.1"
Feb 19 20:49:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:49:30.470 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:49:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:49:30.471 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:49:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:49:30.472 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:49:31 compute-0 nova_compute[188777]: 2026-02-19 20:49:31.100 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:31 compute-0 openstack_network_exporter[207898]: ERROR   20:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:49:31 compute-0 openstack_network_exporter[207898]: ERROR   20:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:49:31 compute-0 podman[258859]: 2026-02-19 20:49:31.430998677 +0000 UTC m=+0.113157817 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:49:34 compute-0 nova_compute[188777]: 2026-02-19 20:49:34.185 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:36 compute-0 nova_compute[188777]: 2026-02-19 20:49:36.102 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:39 compute-0 nova_compute[188777]: 2026-02-19 20:49:39.187 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:39 compute-0 sshd-session[258886]: error: kex_exchange_identification: read: Connection reset by peer
Feb 19 20:49:39 compute-0 sshd-session[258886]: Connection reset by 176.120.22.52 port 26425
Feb 19 20:49:41 compute-0 nova_compute[188777]: 2026-02-19 20:49:41.106 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:44 compute-0 nova_compute[188777]: 2026-02-19 20:49:44.189 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:46 compute-0 nova_compute[188777]: 2026-02-19 20:49:46.108 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:46 compute-0 podman[258889]: 2026-02-19 20:49:46.385553015 +0000 UTC m=+0.063456334 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 19 20:49:46 compute-0 podman[258888]: 2026-02-19 20:49:46.401973917 +0000 UTC m=+0.088857636 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=)
Feb 19 20:49:48 compute-0 podman[258930]: 2026-02-19 20:49:48.390193027 +0000 UTC m=+0.070202744 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 19 20:49:49 compute-0 nova_compute[188777]: 2026-02-19 20:49:49.190 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:51 compute-0 nova_compute[188777]: 2026-02-19 20:49:51.111 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:54 compute-0 nova_compute[188777]: 2026-02-19 20:49:54.191 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:54 compute-0 podman[258951]: 2026-02-19 20:49:54.3843748 +0000 UTC m=+0.067911833 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 19 20:49:54 compute-0 podman[258950]: 2026-02-19 20:49:54.386721683 +0000 UTC m=+0.073950062 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Feb 19 20:49:56 compute-0 nova_compute[188777]: 2026-02-19 20:49:56.114 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:58 compute-0 podman[258992]: 2026-02-19 20:49:58.408914832 +0000 UTC m=+0.087474813 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 19 20:49:59 compute-0 nova_compute[188777]: 2026-02-19 20:49:59.194 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:49:59 compute-0 sshd-session[258990]: Received disconnect from 154.12.80.151 port 52248:11: Bye Bye [preauth]
Feb 19 20:49:59 compute-0 sshd-session[258990]: Disconnected from authenticating user root 154.12.80.151 port 52248 [preauth]
Feb 19 20:49:59 compute-0 podman[204724]: time="2026-02-19T20:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:49:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 19 20:49:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5315 "" "Go-http-client/1.1"
Feb 19 20:50:00 compute-0 podman[259015]: 2026-02-19 20:50:00.397217695 +0000 UTC m=+0.087979539 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS)
Feb 19 20:50:01 compute-0 nova_compute[188777]: 2026-02-19 20:50:01.118 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:01 compute-0 openstack_network_exporter[207898]: ERROR   20:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:50:01 compute-0 openstack_network_exporter[207898]: ERROR   20:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:50:02 compute-0 nova_compute[188777]: 2026-02-19 20:50:02.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:02 compute-0 podman[259035]: 2026-02-19 20:50:02.488492095 +0000 UTC m=+0.166234874 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 19 20:50:04 compute-0 nova_compute[188777]: 2026-02-19 20:50:04.196 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:04 compute-0 nova_compute[188777]: 2026-02-19 20:50:04.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:04 compute-0 nova_compute[188777]: 2026-02-19 20:50:04.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:50:05 compute-0 nova_compute[188777]: 2026-02-19 20:50:05.054 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:50:05 compute-0 nova_compute[188777]: 2026-02-19 20:50:05.054 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:50:05 compute-0 nova_compute[188777]: 2026-02-19 20:50:05.054 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.122 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.492 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updating instance_info_cache with network_info: [{"id": "913d86d2-685f-4393-9143-efa6e9c6941a", "address": "fa:16:3e:c2:a8:ee", "network": {"id": "2194f0b2-0b56-4fa1-a2f7-0ec7651876c4", "bridge": "br-int", "label": "tempest-network-smoke--1477620676", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb9e3732b9f4456d9f90bf3e156f6f7c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap913d86d2-68", "ovs_interfaceid": "913d86d2-685f-4393-9143-efa6e9c6941a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.506 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-dff9d513-54f8-4d73-acf7-df610dc4d064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.506 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: dff9d513-54f8-4d73-acf7-df610dc4d064] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.507 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.507 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:06 compute-0 nova_compute[188777]: 2026-02-19 20:50:06.507 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.199 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.290 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.291 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.291 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.292 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.393 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.468 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.470 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.540 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.549 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.619 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.621 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.684 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.692 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.755 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.756 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.809 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.816 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.864 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.865 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:50:09 compute-0 nova_compute[188777]: 2026-02-19 20:50:09.945 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.283 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.284 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4592MB free_disk=72.05465316772461GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.284 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.285 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.451 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.452 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.452 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 1b6b1397-fda7-4470-883b-1cc5974fac84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.452 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.453 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.453 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.589 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.603 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
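The inventory data above is what fixes the node's schedulable capacity: placement computes capacity per resource class as (total - reserved) * allocation_ratio. A minimal worked check in Python, with the numbers copied from the log line above:

    # Placement capacity formula applied to the reported inventory:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- so four 1-VCPU guests
    # leave ample placement headroom even though the hypervisor view
    # above reports free_vcpus=4 out of 8 physical cores.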
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.604 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:50:10 compute-0 nova_compute[188777]: 2026-02-19 20:50:10.605 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
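The Acquiring/acquired/released triple bracketing the update is the stock oslo.concurrency pattern; those DEBUG lines are emitted by lockutils' own wrapper (lockutils.py:404/409/423), not by nova code. A minimal sketch, assuming oslo.concurrency is installed (the lock name is taken from the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    # Everything under this decorator runs while holding the
    # process-local "compute_resources" semaphore, so the periodic
    # audit above and the update_usage() calls during instance
    # deletion later in this log can never interleave.
    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        pass

    _update_available_resource()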
Feb 19 20:50:11 compute-0 nova_compute[188777]: 2026-02-19 20:50:11.126 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:13 compute-0 nova_compute[188777]: 2026-02-19 20:50:13.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:13 compute-0 nova_compute[188777]: 2026-02-19 20:50:13.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:13 compute-0 nova_compute[188777]: 2026-02-19 20:50:13.266 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:13 compute-0 nova_compute[188777]: 2026-02-19 20:50:13.267 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 19 20:50:14 compute-0 nova_compute[188777]: 2026-02-19 20:50:14.201 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:14 compute-0 nova_compute[188777]: 2026-02-19 20:50:14.280 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:15 compute-0 sshd-session[259083]: Invalid user roman from 158.180.74.7 port 19894
Feb 19 20:50:15 compute-0 sshd-session[259083]: Received disconnect from 158.180.74.7 port 19894:11: Bye Bye [preauth]
Feb 19 20:50:15 compute-0 sshd-session[259083]: Disconnected from invalid user roman 158.180.74.7 port 19894 [preauth]
Feb 19 20:50:16 compute-0 nova_compute[188777]: 2026-02-19 20:50:16.131 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:17 compute-0 nova_compute[188777]: 2026-02-19 20:50:17.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:17 compute-0 podman[259086]: 2026-02-19 20:50:17.413289054 +0000 UTC m=+0.079149054 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:50:17 compute-0 podman[259085]: 2026-02-19 20:50:17.415468882 +0000 UTC m=+0.085424289 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, distribution-scope=public, architecture=x86_64)
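Each health_status journal record above corresponds to one timer-driven run of the container's configured healthcheck test. A hypothetical host-side check of the same status (assumes podman is installed and the container names from the log exist):

    import json
    import subprocess

    def container_health(name: str) -> str:
        # podman inspect returns a JSON array; State.Health.Status holds
        # the same value podman logs as health_status= above.
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True,
                             check=True).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    for name in ("node_exporter", "openstack_network_exporter"):
        print(name, container_health(name))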
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.929 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.929 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.930 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.930 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.930 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.932 188781 INFO nova.compute.manager [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Terminating instance
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.933 188781 DEBUG nova.compute.manager [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:50:18 compute-0 kernel: tap3b9e0369-31 (unregistering): left promiscuous mode
Feb 19 20:50:18 compute-0 NetworkManager[57033]: <info>  [1771534218.9707] device (tap3b9e0369-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.982 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:18 compute-0 ovn_controller[98843]: 2026-02-19T20:50:18Z|00149|binding|INFO|Releasing lport 3b9e0369-31ef-4446-b291-70f0cbddeb63 from this chassis (sb_readonly=0)
Feb 19 20:50:18 compute-0 ovn_controller[98843]: 2026-02-19T20:50:18Z|00150|binding|INFO|Setting lport 3b9e0369-31ef-4446-b291-70f0cbddeb63 down in Southbound
Feb 19 20:50:18 compute-0 ovn_controller[98843]: 2026-02-19T20:50:18Z|00151|binding|INFO|Removing iface tap3b9e0369-31 ovn-installed in OVS
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.985 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:18 compute-0 nova_compute[188777]: 2026-02-19 20:50:18.988 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:18 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:18.995 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:ea:b9 10.100.1.142'], port_security=['fa:16:3e:56:ea:b9 10.100.1.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.142/16', 'neutron:device_id': '1b6b1397-fda7-4470-883b-1cc5974fac84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c84042e2-5094-46cb-8818-ed6fb8d69afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e658df7-7d87-44f0-8690-f7f2e1d7b0ae, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=3b9e0369-31ef-4446-b291-70f0cbddeb63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:18.999 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 3b9e0369-31ef-4446-b291-70f0cbddeb63 in datapath 03b0387c-cb4d-416d-b212-4d980b66cbe2 unbound from our chassis
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.004 108175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03b0387c-cb4d-416d-b212-4d980b66cbe2
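The "Matched UPDATE" line above is the metadata agent's OVSDB IDL event hook firing on the Port_Binding row whose up/chassis columns just changed. A rough sketch of that pattern using ovsdbapp's RowEvent base class (the subclass body is illustrative, not neutron's actual handler; registration with the agent's IDL is omitted):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Mirrors the logged repr: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` carries only the columns that changed; here up and
            # chassis flipped, which the agent interprets as "port
            # unbound from our chassis" before reconciling metadata.
            print('Port_Binding changed for lport', row.logical_port)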
Feb 19 20:50:19 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 19 20:50:19 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 6min 50.949s CPU time.
Feb 19 20:50:19 compute-0 systemd-machined[158158]: Machine qemu-12-instance-0000000c terminated.
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.018 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9d1971-8856-46a1-91c7-603271eeb98b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.045 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[73d7301f-eb53-4cfe-8d79-946207465894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.048 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[58bf7d07-5421-4b82-8caa-337469261e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.070 242224 DEBUG oslo.privsep.daemon [-] privsep: reply[0028d620-0864-49c1-8be6-8fd04e4e59cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:19 compute-0 podman[259127]: 2026-02-19 20:50:19.07777675 +0000 UTC m=+0.078993498 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true)
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.085 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[7982d4d8-3a4f-488a-ae37-584b1bfbf0cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03b0387c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:70:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493020, 'reachable_time': 39256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259157, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.097 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[b73ac615-bda0-41bf-8157-688bc7dae5e4]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap03b0387c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493027, 'tstamp': 493027}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259158, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03b0387c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493030, 'tstamp': 493030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259158, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
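The two privsep replies above are netlink dumps (RTM_NEWLINK/RTM_NEWADDR) taken inside the ovnmeta- namespace: the tap device carries both the tenant address 10.100.0.2/16 and the 169.254.169.254/32 metadata VIP. A rough equivalent of that query with pyroute2 (assumes root and that the namespace and interface names from the log exist):

    from pyroute2 import NetNS

    # Fetch the same data as the RTM_NEWADDR reply above, directly.
    with NetNS('ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2') as ns:
        idx = ns.link_lookup(ifname='tap03b0387c-c1')[0]
        for msg in ns.get_addr(index=idx):
            # Expect 10.100.0.2 plus the 169.254.169.254 metadata VIP.
            print(msg.get_attr('IFA_ADDRESS'))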
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.099 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03b0387c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.100 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.105 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.106 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03b0387c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.106 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.106 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03b0387c-c0, col_values=(('external_ids', {'iface-id': 'ac510fcf-4783-4f81-b107-f5dac80c5fad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.107 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.153 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.158 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.189 188781 INFO nova.virt.libvirt.driver [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Instance destroyed successfully.
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.190 188781 DEBUG nova.objects.instance [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lazy-loading 'resources' on Instance uuid 1b6b1397-fda7-4470-883b-1cc5974fac84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.202 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.263 188781 DEBUG nova.virt.libvirt.vif [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:36:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4749372-asg-gqiuwwiovj7t-inxwtqyxfrgl-i7ynim6swjio',id=12,image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:36:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='08c5967c-a408-49e3-be73-425b7dd8ee8c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3e54c3b3dadc42fca16da4cb7212a2db',ramdisk_id='',reservation_id='r-1jws38c4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-653304289',owner_user_name='tempest-PrometheusGabbiTest-653304289-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:36:59Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4495bf20aedd42ff97fdae62ef729522',uuid=1b6b1397-fda7-4470-883b-1cc5974fac84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.263 188781 DEBUG nova.network.os_vif_util [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converting VIF {"id": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "address": "fa:16:3e:56:ea:b9", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9e0369-31", "ovs_interfaceid": "3b9e0369-31ef-4446-b291-70f0cbddeb63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.264 188781 DEBUG nova.network.os_vif_util [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.265 188781 DEBUG os_vif [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.268 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.268 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b9e0369-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.270 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.272 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.275 188781 INFO os_vif [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:ea:b9,bridge_name='br-int',has_traffic_filtering=True,id=3b9e0369-31ef-4446-b291-70f0cbddeb63,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9e0369-31')
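The unplug itself is just the single OVSDB transaction logged above (DelPortCommand with if_exists=True, so it is a no-op if OVN already removed the interface). A hedged sketch of the same call through ovsdbapp's documented Open_vSwitch API (the socket path is a typical default, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Equivalent of the logged DelPortCommand(port=tap3b9e0369-31,
    # bridge=br-int, if_exists=True).
    api.del_port('tap3b9e0369-31', bridge='br-int',
                 if_exists=True).execute(check_error=True)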
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.276 188781 INFO nova.virt.libvirt.driver [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Deleting instance files /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84_del
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.276 188781 INFO nova.virt.libvirt.driver [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Deletion of /var/lib/nova/instances/1b6b1397-fda7-4470-883b-1cc5974fac84_del complete
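The _del suffix in the two lines above reflects nova's rename-then-remove pattern: the instance directory is first renamed to <uuid>_del and only then deleted, so a crash mid-cleanup never leaves a half-removed tree under the live path. A minimal sketch of that pattern (paths and uuid from the log; the helper name is hypothetical):

    import shutil
    from pathlib import Path

    def delete_instance_files(base: Path, uuid: str) -> None:
        live = base / uuid
        doomed = base / f"{uuid}_del"
        live.rename(doomed)    # atomic within the same filesystem
        shutil.rmtree(doomed)  # "Deletion of ..._del complete"

    delete_instance_files(Path("/var/lib/nova/instances"),
                          "1b6b1397-fda7-4470-883b-1cc5974fac84")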
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.333 188781 INFO nova.compute.manager [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Took 0.40 seconds to destroy the instance on the hypervisor.
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.334 188781 DEBUG oslo.service.loopingcall [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.334 188781 DEBUG nova.compute.manager [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.334 188781 DEBUG nova.network.neutron [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.342 188781 DEBUG nova.compute.manager [req-4b19dee5-50ac-4481-ad1f-9bdea4df32c0 req-3987ba51-8c74-48c1-93ff-ecb2a7d588e2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-vif-unplugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.343 188781 DEBUG oslo_concurrency.lockutils [req-4b19dee5-50ac-4481-ad1f-9bdea4df32c0 req-3987ba51-8c74-48c1-93ff-ecb2a7d588e2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.343 188781 DEBUG oslo_concurrency.lockutils [req-4b19dee5-50ac-4481-ad1f-9bdea4df32c0 req-3987ba51-8c74-48c1-93ff-ecb2a7d588e2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.343 188781 DEBUG oslo_concurrency.lockutils [req-4b19dee5-50ac-4481-ad1f-9bdea4df32c0 req-3987ba51-8c74-48c1-93ff-ecb2a7d588e2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.343 188781 DEBUG nova.compute.manager [req-4b19dee5-50ac-4481-ad1f-9bdea4df32c0 req-3987ba51-8c74-48c1-93ff-ecb2a7d588e2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] No waiting events found dispatching network-vif-unplugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.343 188781 DEBUG nova.compute.manager [req-4b19dee5-50ac-4481-ad1f-9bdea4df32c0 req-3987ba51-8c74-48c1-93ff-ecb2a7d588e2 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-vif-unplugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.494 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:ad:15', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:0d:ba:1d:25:53'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:50:19 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:19.495 108175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 19 20:50:19 compute-0 nova_compute[188777]: 2026-02-19 20:50:19.495 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.344 188781 DEBUG nova.network.neutron [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.362 188781 INFO nova.compute.manager [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Took 1.03 seconds to deallocate network for instance.
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.401 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.401 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.434 188781 DEBUG nova.compute.manager [req-7ac04b02-7145-4a93-9dad-aae393220001 req-9acf2a02-e0f6-4d5b-ae40-71108972707e 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-vif-deleted-3b9e0369-31ef-4446-b291-70f0cbddeb63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.514 188781 DEBUG nova.compute.provider_tree [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.528 188781 DEBUG nova.scheduler.client.report [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.545 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.564 188781 INFO nova.scheduler.client.report [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Deleted allocations for instance 1b6b1397-fda7-4470-883b-1cc5974fac84
Feb 19 20:50:20 compute-0 nova_compute[188777]: 2026-02-19 20:50:20.617 188781 DEBUG oslo_concurrency.lockutils [None req-a800971f-5aa6-441b-a4f2-356ca7c12a37 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:21 compute-0 nova_compute[188777]: 2026-02-19 20:50:21.430 188781 DEBUG nova.compute.manager [req-99344875-e5b2-49b7-8151-5ea281cf04a3 req-1c22ad23-4116-4463-9715-9e30c9be09ef 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:50:21 compute-0 nova_compute[188777]: 2026-02-19 20:50:21.431 188781 DEBUG oslo_concurrency.lockutils [req-99344875-e5b2-49b7-8151-5ea281cf04a3 req-1c22ad23-4116-4463-9715-9e30c9be09ef 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:21 compute-0 nova_compute[188777]: 2026-02-19 20:50:21.431 188781 DEBUG oslo_concurrency.lockutils [req-99344875-e5b2-49b7-8151-5ea281cf04a3 req-1c22ad23-4116-4463-9715-9e30c9be09ef 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:21 compute-0 nova_compute[188777]: 2026-02-19 20:50:21.431 188781 DEBUG oslo_concurrency.lockutils [req-99344875-e5b2-49b7-8151-5ea281cf04a3 req-1c22ad23-4116-4463-9715-9e30c9be09ef 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "1b6b1397-fda7-4470-883b-1cc5974fac84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:21 compute-0 nova_compute[188777]: 2026-02-19 20:50:21.432 188781 DEBUG nova.compute.manager [req-99344875-e5b2-49b7-8151-5ea281cf04a3 req-1c22ad23-4116-4463-9715-9e30c9be09ef 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] No waiting events found dispatching network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:50:21 compute-0 nova_compute[188777]: 2026-02-19 20:50:21.432 188781 WARNING nova.compute.manager [req-99344875-e5b2-49b7-8151-5ea281cf04a3 req-1c22ad23-4116-4463-9715-9e30c9be09ef 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Received unexpected event network-vif-plugged-3b9e0369-31ef-4446-b291-70f0cbddeb63 for instance with vm_state deleted and task_state None.
Feb 19 20:50:24 compute-0 nova_compute[188777]: 2026-02-19 20:50:24.204 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:24 compute-0 nova_compute[188777]: 2026-02-19 20:50:24.270 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 podman[259177]: 2026-02-19 20:50:25.417450886 +0000 UTC m=+0.089699973 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 19 20:50:25 compute-0 podman[259176]: 2026-02-19 20:50:25.42173761 +0000 UTC m=+0.093726779 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, config_id=kepler, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.580 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.581 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.582 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.583 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.584 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.586 188781 INFO nova.compute.manager [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Terminating instance
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.587 188781 DEBUG nova.compute.manager [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 19 20:50:25 compute-0 kernel: tap6730c115-fc (unregistering): left promiscuous mode
Feb 19 20:50:25 compute-0 NetworkManager[57033]: <info>  [1771534225.6349] device (tap6730c115-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 19 20:50:25 compute-0 ovn_controller[98843]: 2026-02-19T20:50:25Z|00152|binding|INFO|Releasing lport 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 from this chassis (sb_readonly=0)
Feb 19 20:50:25 compute-0 ovn_controller[98843]: 2026-02-19T20:50:25Z|00153|binding|INFO|Setting lport 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 down in Southbound
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.647 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 ovn_controller[98843]: 2026-02-19T20:50:25Z|00154|binding|INFO|Removing iface tap6730c115-fc ovn-installed in OVS
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.651 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.656 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.655 108175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:4e:00 10.100.3.124'], port_security=['fa:16:3e:b9:4e:00 10.100.3.124'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.124/16', 'neutron:device_id': 'c7d04a5a-1e2f-40c2-a686-18b23a5bddfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e54c3b3dadc42fca16da4cb7212a2db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c84042e2-5094-46cb-8818-ed6fb8d69afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e658df7-7d87-44f0-8690-f7f2e1d7b0ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>], logical_port=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc014bf2790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.657 108175 INFO neutron.agent.ovn.metadata.agent [-] Port 6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 in datapath 03b0387c-cb4d-416d-b212-4d980b66cbe2 unbound from our chassis
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.659 108175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03b0387c-cb4d-416d-b212-4d980b66cbe2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.661 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4a6e01-3e55-4c0d-9db8-7ab5ea9d040b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.662 108175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2 namespace which is not needed anymore
Feb 19 20:50:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 19 20:50:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 6min 26.011s CPU time.
Feb 19 20:50:25 compute-0 systemd-machined[158158]: Machine qemu-14-instance-0000000d terminated.
Feb 19 20:50:25 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [NOTICE]   (253883) : haproxy version is 2.8.14-c23fe91
Feb 19 20:50:25 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [NOTICE]   (253883) : path to executable is /usr/sbin/haproxy
Feb 19 20:50:25 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [WARNING]  (253883) : Exiting Master process...
Feb 19 20:50:25 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [ALERT]    (253883) : Current worker (253885) exited with code 143 (Terminated)
Feb 19 20:50:25 compute-0 neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2[253879]: [WARNING]  (253883) : All workers exited. Exiting... (0)
Feb 19 20:50:25 compute-0 systemd[1]: libpod-c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855.scope: Deactivated successfully.
Feb 19 20:50:25 compute-0 podman[259240]: 2026-02-19 20:50:25.808291355 +0000 UTC m=+0.056478405 container died c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.811 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.839 188781 INFO nova.virt.libvirt.driver [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Instance destroyed successfully.
Feb 19 20:50:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855-userdata-shm.mount: Deactivated successfully.
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.840 188781 DEBUG nova.objects.instance [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lazy-loading 'resources' on Instance uuid c7d04a5a-1e2f-40c2-a686-18b23a5bddfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:50:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cff5322526fa26e2f34632acf71ebbd870f506531aaceb13c1e168509e7174d-merged.mount: Deactivated successfully.
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.853 188781 DEBUG nova.virt.libvirt.vif [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-19T20:40:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4749372-asg-gqiuwwiovj7t-a22kewlbuwbg-ig3ypn6zxo3u',id=13,image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-19T20:40:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='08c5967c-a408-49e3-be73-425b7dd8ee8c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3e54c3b3dadc42fca16da4cb7212a2db',ramdisk_id='',reservation_id='r-4fi1e04j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e98a7b34-d7ef-4dcd-b1f3-0a369d480f18',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-653304289',owner_user_name='tempest-PrometheusGabbiTest-653304289-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-19T20:40:32Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4495bf20aedd42ff97fdae62ef729522',uuid=c7d04a5a-1e2f-40c2-a686-18b23a5bddfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.853 188781 DEBUG nova.network.os_vif_util [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converting VIF {"id": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "address": "fa:16:3e:b9:4e:00", "network": {"id": "03b0387c-cb4d-416d-b212-4d980b66cbe2", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e54c3b3dadc42fca16da4cb7212a2db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6730c115-fc", "ovs_interfaceid": "6730c115-fc6d-4fab-9c7d-1f6f4bd9e878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.854 188781 DEBUG nova.network.os_vif_util [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.854 188781 DEBUG os_vif [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.855 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 podman[259240]: 2026-02-19 20:50:25.856103979 +0000 UTC m=+0.104291009 container cleanup c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.856 188781 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6730c115-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.858 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.859 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.861 188781 INFO os_vif [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:4e:00,bridge_name='br-int',has_traffic_filtering=True,id=6730c115-fc6d-4fab-9c7d-1f6f4bd9e878,network=Network(03b0387c-cb4d-416d-b212-4d980b66cbe2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6730c115-fc')
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.862 188781 INFO nova.virt.libvirt.driver [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Deleting instance files /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa_del
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.862 188781 INFO nova.virt.libvirt.driver [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Deletion of /var/lib/nova/instances/c7d04a5a-1e2f-40c2-a686-18b23a5bddfa_del complete
Feb 19 20:50:25 compute-0 systemd[1]: libpod-conmon-c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855.scope: Deactivated successfully.
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.915 188781 INFO nova.compute.manager [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Took 0.33 seconds to destroy the instance on the hypervisor.
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.915 188781 DEBUG oslo.service.loopingcall [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.916 188781 DEBUG nova.compute.manager [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.916 188781 DEBUG nova.network.neutron [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 19 20:50:25 compute-0 podman[259284]: 2026-02-19 20:50:25.92944135 +0000 UTC m=+0.052064798 container remove c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.938 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c8fd15-9ab5-4b22-a3fb-e45d32641a48]: (4, ('Thu Feb 19 08:50:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2 (c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855)\nc053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855\nThu Feb 19 08:50:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2 (c053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855)\nc053a1325ad2e4193d5a2754f73a1af14d09ac02a19a79e8dc8fdabad7c22855\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.944 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[036d8667-e390-4c26-9c2b-17a38eb6aadd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.949 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03b0387c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.953 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 kernel: tap03b0387c-c0: left promiscuous mode
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.955 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.959 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[76974748-782c-42db-b497-d8cc436d2fbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 nova_compute[188777]: 2026-02-19 20:50:25.964 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.976 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[853aedee-77bd-4ccf-830e-2b8e2bdaa45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.978 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[346ed00d-2b7e-4203-8d20-09d0d26941a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.989 242160 DEBUG oslo.privsep.daemon [-] privsep: reply[f8917ea5-7fa6-445b-bcb4-40e61242c2e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493013, 'reachable_time': 35215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259299, 'error': None, 'target': 'ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d03b0387c\x2dcb4d\x2d416d\x2db212\x2d4d980b66cbe2.mount: Deactivated successfully.
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.992 108698 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03b0387c-cb4d-416d-b212-4d980b66cbe2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 19 20:50:25 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:25.993 108698 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf76cc2-dd37-44b3-8f61-14d86a17575e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 19 20:50:26 compute-0 nova_compute[188777]: 2026-02-19 20:50:26.213 188781 DEBUG nova.compute.manager [req-473b536f-21f6-40b0-bcfd-41a74dc679d0 req-930b03c2-3a36-4a1b-9d1e-af9d3211fae7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-vif-unplugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:50:26 compute-0 nova_compute[188777]: 2026-02-19 20:50:26.213 188781 DEBUG oslo_concurrency.lockutils [req-473b536f-21f6-40b0-bcfd-41a74dc679d0 req-930b03c2-3a36-4a1b-9d1e-af9d3211fae7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:26 compute-0 nova_compute[188777]: 2026-02-19 20:50:26.214 188781 DEBUG oslo_concurrency.lockutils [req-473b536f-21f6-40b0-bcfd-41a74dc679d0 req-930b03c2-3a36-4a1b-9d1e-af9d3211fae7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:26 compute-0 nova_compute[188777]: 2026-02-19 20:50:26.217 188781 DEBUG oslo_concurrency.lockutils [req-473b536f-21f6-40b0-bcfd-41a74dc679d0 req-930b03c2-3a36-4a1b-9d1e-af9d3211fae7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:26 compute-0 nova_compute[188777]: 2026-02-19 20:50:26.217 188781 DEBUG nova.compute.manager [req-473b536f-21f6-40b0-bcfd-41a74dc679d0 req-930b03c2-3a36-4a1b-9d1e-af9d3211fae7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] No waiting events found dispatching network-vif-unplugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:50:26 compute-0 nova_compute[188777]: 2026-02-19 20:50:26.218 188781 DEBUG nova.compute.manager [req-473b536f-21f6-40b0-bcfd-41a74dc679d0 req-930b03c2-3a36-4a1b-9d1e-af9d3211fae7 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-vif-unplugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.089 188781 DEBUG nova.network.neutron [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.118 188781 INFO nova.compute.manager [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Took 1.20 seconds to deallocate network for instance.
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.158 188781 DEBUG nova.compute.manager [req-efa84134-2ac5-4a20-8bdc-89ad0b19ab04 req-8fd5c196-c21a-4c29-9e19-b578afc9a6c6 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-vif-deleted-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.160 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.160 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.241 188781 DEBUG nova.compute.provider_tree [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.254 188781 DEBUG nova.scheduler.client.report [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.276 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.301 188781 INFO nova.scheduler.client.report [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Deleted allocations for instance c7d04a5a-1e2f-40c2-a686-18b23a5bddfa
Feb 19 20:50:27 compute-0 nova_compute[188777]: 2026-02-19 20:50:27.360 188781 DEBUG oslo_concurrency.lockutils [None req-6a262b54-9b64-4ada-8f5a-124b21ec59ff 4495bf20aedd42ff97fdae62ef729522 3e54c3b3dadc42fca16da4cb7212a2db - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:28 compute-0 nova_compute[188777]: 2026-02-19 20:50:28.293 188781 DEBUG nova.compute.manager [req-9ed18b97-2d82-4c03-bdbc-710c28435d3b req-e6789b2e-3691-4f0d-9332-01a3bbd6d226 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 19 20:50:28 compute-0 nova_compute[188777]: 2026-02-19 20:50:28.294 188781 DEBUG oslo_concurrency.lockutils [req-9ed18b97-2d82-4c03-bdbc-710c28435d3b req-e6789b2e-3691-4f0d-9332-01a3bbd6d226 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Acquiring lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:28 compute-0 nova_compute[188777]: 2026-02-19 20:50:28.294 188781 DEBUG oslo_concurrency.lockutils [req-9ed18b97-2d82-4c03-bdbc-710c28435d3b req-e6789b2e-3691-4f0d-9332-01a3bbd6d226 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:28 compute-0 nova_compute[188777]: 2026-02-19 20:50:28.295 188781 DEBUG oslo_concurrency.lockutils [req-9ed18b97-2d82-4c03-bdbc-710c28435d3b req-e6789b2e-3691-4f0d-9332-01a3bbd6d226 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] Lock "c7d04a5a-1e2f-40c2-a686-18b23a5bddfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:28 compute-0 nova_compute[188777]: 2026-02-19 20:50:28.295 188781 DEBUG nova.compute.manager [req-9ed18b97-2d82-4c03-bdbc-710c28435d3b req-e6789b2e-3691-4f0d-9332-01a3bbd6d226 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] No waiting events found dispatching network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 19 20:50:28 compute-0 nova_compute[188777]: 2026-02-19 20:50:28.295 188781 WARNING nova.compute.manager [req-9ed18b97-2d82-4c03-bdbc-710c28435d3b req-e6789b2e-3691-4f0d-9332-01a3bbd6d226 54b3392deec747dbacad3be8ff78a8eb e01a26001523409a81091540e13a966d - - default default] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Received unexpected event network-vif-plugged-6730c115-fc6d-4fab-9c7d-1f6f4bd9e878 for instance with vm_state deleted and task_state None.
Feb 19 20:50:28 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:28.497 108175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e2fe6bb6-fad0-4563-8388-215a30f03e3f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 19 20:50:29 compute-0 nova_compute[188777]: 2026-02-19 20:50:29.207 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:29 compute-0 podman[259301]: 2026-02-19 20:50:29.381801528 +0000 UTC m=+0.063790414 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 19 20:50:29 compute-0 podman[204724]: time="2026-02-19T20:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:50:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 19 20:50:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4847 "" "Go-http-client/1.1"
Feb 19 20:50:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:30.471 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:50:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:30.471 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:50:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:50:30.472 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:50:30 compute-0 nova_compute[188777]: 2026-02-19 20:50:30.859 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:31 compute-0 openstack_network_exporter[207898]: ERROR   20:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:50:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:50:31 compute-0 openstack_network_exporter[207898]: ERROR   20:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:50:31 compute-0 openstack_network_exporter[207898]: 
Feb 19 20:50:31 compute-0 podman[259325]: 2026-02-19 20:50:31.426610696 +0000 UTC m=+0.110846103 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute)
Feb 19 20:50:33 compute-0 nova_compute[188777]: 2026-02-19 20:50:33.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:33 compute-0 podman[259345]: 2026-02-19 20:50:33.455539918 +0000 UTC m=+0.131157559 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 19 20:50:34 compute-0 nova_compute[188777]: 2026-02-19 20:50:34.185 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771534219.183913, 1b6b1397-fda7-4470-883b-1cc5974fac84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:50:34 compute-0 nova_compute[188777]: 2026-02-19 20:50:34.185 188781 INFO nova.compute.manager [-] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] VM Stopped (Lifecycle Event)
Feb 19 20:50:34 compute-0 nova_compute[188777]: 2026-02-19 20:50:34.208 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:34 compute-0 nova_compute[188777]: 2026-02-19 20:50:34.213 188781 DEBUG nova.compute.manager [None req-986b9fdf-4ad4-4b01-9296-682b7d250123 - - - - - -] [instance: 1b6b1397-fda7-4470-883b-1cc5974fac84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:50:35 compute-0 nova_compute[188777]: 2026-02-19 20:50:35.276 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:50:35 compute-0 nova_compute[188777]: 2026-02-19 20:50:35.277 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 19 20:50:35 compute-0 nova_compute[188777]: 2026-02-19 20:50:35.294 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 19 20:50:35 compute-0 nova_compute[188777]: 2026-02-19 20:50:35.863 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:39 compute-0 nova_compute[188777]: 2026-02-19 20:50:39.211 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:40 compute-0 nova_compute[188777]: 2026-02-19 20:50:40.837 188781 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771534225.8355737, c7d04a5a-1e2f-40c2-a686-18b23a5bddfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 19 20:50:40 compute-0 nova_compute[188777]: 2026-02-19 20:50:40.837 188781 INFO nova.compute.manager [-] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] VM Stopped (Lifecycle Event)
Feb 19 20:50:40 compute-0 nova_compute[188777]: 2026-02-19 20:50:40.867 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:40 compute-0 nova_compute[188777]: 2026-02-19 20:50:40.888 188781 DEBUG nova.compute.manager [None req-6dd321b0-0350-4ce8-8bd2-ce2326dd0cca - - - - - -] [instance: c7d04a5a-1e2f-40c2-a686-18b23a5bddfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 19 20:50:41 compute-0 ovn_controller[98843]: 2026-02-19T20:50:41Z|00155|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:50:41 compute-0 ovn_controller[98843]: 2026-02-19T20:50:41Z|00156|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:50:41 compute-0 nova_compute[188777]: 2026-02-19 20:50:41.240 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:43 compute-0 ovn_controller[98843]: 2026-02-19T20:50:43Z|00157|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:50:43 compute-0 ovn_controller[98843]: 2026-02-19T20:50:43Z|00158|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:50:43 compute-0 nova_compute[188777]: 2026-02-19 20:50:43.802 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:44 compute-0 nova_compute[188777]: 2026-02-19 20:50:44.212 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:45 compute-0 nova_compute[188777]: 2026-02-19 20:50:45.870 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:46 compute-0 ovn_controller[98843]: 2026-02-19T20:50:46Z|00159|binding|INFO|Releasing lport 55b38ec7-c28d-4985-87ac-ac8d24f4e97c from this chassis (sb_readonly=0)
Feb 19 20:50:46 compute-0 ovn_controller[98843]: 2026-02-19T20:50:46Z|00160|binding|INFO|Releasing lport a514a3b0-3622-43cb-93f5-1ce2f2eacb84 from this chassis (sb_readonly=0)
Feb 19 20:50:46 compute-0 nova_compute[188777]: 2026-02-19 20:50:46.865 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:48 compute-0 podman[259372]: 2026-02-19 20:50:48.404012288 +0000 UTC m=+0.083289253 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 19 20:50:48 compute-0 podman[259371]: 2026-02-19 20:50:48.403971756 +0000 UTC m=+0.089022021 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 19 20:50:49 compute-0 nova_compute[188777]: 2026-02-19 20:50:49.214 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:49 compute-0 podman[259412]: 2026-02-19 20:50:49.393845149 +0000 UTC m=+0.084387527 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:50:50 compute-0 nova_compute[188777]: 2026-02-19 20:50:50.875 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:54 compute-0 nova_compute[188777]: 2026-02-19 20:50:54.217 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:55 compute-0 nova_compute[188777]: 2026-02-19 20:50:55.880 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:56 compute-0 podman[259433]: 2026-02-19 20:50:56.371937668 +0000 UTC m=+0.057543349 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 19 20:50:56 compute-0 podman[259432]: 2026-02-19 20:50:56.398351703 +0000 UTC m=+0.085889404 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, name=ubi9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, container_name=kepler)
Feb 19 20:50:59 compute-0 nova_compute[188777]: 2026-02-19 20:50:59.221 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:50:59 compute-0 podman[204724]: time="2026-02-19T20:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:50:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 19 20:50:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4845 "" "Go-http-client/1.1"
Feb 19 20:51:00 compute-0 podman[259470]: 2026-02-19 20:51:00.364335238 +0000 UTC m=+0.051746678 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 19 20:51:00 compute-0 nova_compute[188777]: 2026-02-19 20:51:00.884 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:01 compute-0 openstack_network_exporter[207898]: ERROR   20:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:51:01 compute-0 openstack_network_exporter[207898]: ERROR   20:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:51:02 compute-0 podman[259494]: 2026-02-19 20:51:02.415492044 +0000 UTC m=+0.103657149 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 19 20:51:03 compute-0 nova_compute[188777]: 2026-02-19 20:51:03.282 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:04 compute-0 nova_compute[188777]: 2026-02-19 20:51:04.222 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:04 compute-0 podman[259514]: 2026-02-19 20:51:04.420502019 +0000 UTC m=+0.112632200 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 19 20:51:05 compute-0 nova_compute[188777]: 2026-02-19 20:51:05.886 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:06 compute-0 nova_compute[188777]: 2026-02-19 20:51:06.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:06 compute-0 nova_compute[188777]: 2026-02-19 20:51:06.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:51:06 compute-0 nova_compute[188777]: 2026-02-19 20:51:06.308 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 19 20:51:06 compute-0 nova_compute[188777]: 2026-02-19 20:51:06.309 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:08 compute-0 nova_compute[188777]: 2026-02-19 20:51:08.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:08 compute-0 nova_compute[188777]: 2026-02-19 20:51:08.265 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:51:09 compute-0 nova_compute[188777]: 2026-02-19 20:51:09.224 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:09 compute-0 nova_compute[188777]: 2026-02-19 20:51:09.261 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.291 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.291 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.292 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.292 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.375 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.441 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.442 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.500 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.513 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.571 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.573 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.635 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.889 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.970 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.971 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4858MB free_disk=72.11279296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.972 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:51:10 compute-0 nova_compute[188777]: 2026-02-19 20:51:10.972 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.058 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.059 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.059 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.059 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.297 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.309 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.324 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:51:11 compute-0 nova_compute[188777]: 2026-02-19 20:51:11.325 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:51:14 compute-0 nova_compute[188777]: 2026-02-19 20:51:14.226 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:14 compute-0 nova_compute[188777]: 2026-02-19 20:51:14.325 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.156 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.157 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.157 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa4f6728800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.159 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.160 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.161 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.162 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.163 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a390>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.164 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.165 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.166 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.166 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.166 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.167 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.168 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.168 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.168 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.169 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.169 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.169 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.170 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.170 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.170 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa4f683ec60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.173 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '997ebdcf-7eab-485b-8fbf-d21112c78946', 'name': 'tempest-AttachInterfacesUnderV243Test-server-684728485', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54ce0de2bf12421a9458013ccaa2dcad', 'user_id': '90c9e30d17534357bece36d1acaab39c', 'hostId': 'f46cf9989db3abf7517c94fba8fc996a8b55c81d8ccd61b23f3020bd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.179 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dff9d513-54f8-4d73-acf7-df610dc4d064', 'name': 'tempest-TestNetworkBasicOps-server-215985627', 'flavor': {'id': '68c4e072-7c2b-48a1-8e07-0fd69e153270', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '17b9bce8-a91b-495d-ac33-cf63893413f9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'eb9e3732b9f4456d9f90bf3e156f6f7c', 'user_id': 'ef20d0162e404953a8f45beac9fadf18', 'hostId': 'f5b284f60221ec4908d310f9d0c4e0647a5dcc4e862839352782ffc8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.180 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.180 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.181 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.181 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-19T20:51:15.181747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.186 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.190 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.191 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa4f672a480>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.191 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa4f672a180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.192 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.192 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.192 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.192 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.193 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-19T20:51:15.192686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.193 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.194 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa4f672bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.194 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a240>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.195 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.195 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-19T20:51:15.195460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.196 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.196 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa4f672a270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.197 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.197 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.197 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.197 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.198 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-19T20:51:15.197715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.198 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.198 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
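Every pollster run above logs a coordination check that resolves to group name [None], so no workload partitioning applies and this agent polls all locally discovered instances itself. When a polling source does define a coordination group, agents join a tooz hash ring and each polls only the resources that map to it. A toy sketch of the partitioning idea, assuming simple modulo hashing rather than tooz's real ring:

# Toy stand-in for hash-ring workload partitioning (ceilometer uses tooz).
import hashlib

def owns(resource_id, my_agent, all_agents):
    """True if this agent is responsible for the resource."""
    agents = sorted(all_agents)
    bucket = int(hashlib.md5(resource_id.encode()).hexdigest(), 16) % len(agents)
    return agents[bucket] == my_agent

agents = ["compute-0", "compute-1"]
for rid in ["997ebdcf-7eab-485b-8fbf-d21112c78946",
            "dff9d513-54f8-4d73-acf7-df610dc4d064"]:
    print(rid[:8], owns(rid, "compute-0", agents))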
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa4f6728ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.199 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.199 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.199 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.200 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-19T20:51:15.200101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.219 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.236 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.237 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
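Both instances report power.state volume 1. Assuming the pollster forwards libvirt's virDomainState value unchanged, which is consistent with these logs, the volume decodes against libvirt's documented state enum:

# libvirt virDomainState values (per libvirt's documented enum).
LIBVIRT_POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    2: "BLOCKED",
    3: "PAUSED",
    4: "SHUTDOWN",
    5: "SHUTOFF",
    6: "CRASHED",
    7: "PMSUSPENDED",
}

def decode_power_state(volume):
    return LIBVIRT_POWER_STATE.get(int(volume), "UNKNOWN")

print(decode_power_state(1))  # RUNNING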
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa4f672a300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.237 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.237 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.238 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.238 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-19T20:51:15.238352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.239 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
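Two workers interleave throughout this cycle: PID 15 runs the pollsters and emits a per-meter heartbeat marker, while PID 12 records the timestamp via _update_status, which is why the "Updated heartbeat" lines trail their pollsters slightly. A minimal producer/consumer sketch of that shape; the queue-based layout here is an assumption, not ceilometer's exact design:

# Hypothetical sketch: a polling worker hands heartbeat markers to a status
# worker, which records the last-seen time per meter.
import datetime
import queue
import threading

heartbeats = {}            # meter -> last-seen timestamp
updates = queue.Queue()

def status_worker():
    while True:
        meter = updates.get()
        if meter is None:  # shutdown sentinel
            return
        heartbeats[meter] = datetime.datetime.now(datetime.timezone.utc)

t = threading.Thread(target=status_worker)
t.start()
updates.put("network.outgoing.bytes.delta")  # what the poller would do
updates.put(None)
t.join()
print(heartbeats)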
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.240 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa4f672ab70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.240 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.240 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.240 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.240 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-19T20:51:15.240934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.253 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.253 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 nova_compute[188777]: 2026-02-19 20:51:15.260 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.265 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.266 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.266 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
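disk.device.* pollsters emit one sample per attached block device, which is why each instance above yields two disk.device.capacity volumes (a 1 GiB virtual disk plus a much smaller second device, likely a config drive), with each sample keyed by an instance-device resource id. A hedged sketch of that fan-out; the Sample shape and device names are illustrative:

# Illustrative fan-out of per-device disk samples for one instance.
from dataclasses import dataclass

@dataclass
class Sample:
    resource_id: str
    meter: str
    volume: int

def capacity_samples(instance_id, devices):
    # devices: iterable of (device_name, capacity_bytes) pairs
    return [Sample(f"{instance_id}-{dev}", "disk.device.capacity", cap)
            for dev, cap in devices]

for s in capacity_samples("997ebdcf", [("vda", 1073741824), ("vdb", 509952)]):
    print(s)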
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.266 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa4f6728290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.267 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.267 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.267 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.267 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-19T20:51:15.267706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.301 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 30759424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.301 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.338 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.338 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.338 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa4f69216a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f83ffb90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/cpu volume: 42010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.339 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/cpu volume: 41820000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
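The cpu meter is cumulative guest CPU time in nanoseconds, so the 42010000000 above is about 42.01 s of CPU time accumulated since the instance started. Rate-of-change (utilization) is left to the storage backend, but the arithmetic over one polling interval is simple:

# Average CPU utilization between two cumulative readings, in percent.
def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
    used_s = (curr_ns - prev_ns) / 1e9
    return 100.0 * used_s / (interval_s * vcpus)

# e.g. two polls 300 s apart (previous reading here is hypothetical):
print(cpu_util_percent(41_710_000_000, 42_010_000_000, 300))  # 0.1 -> 0.1%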
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa4f67286b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-19T20:51:15.339530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
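The "Skip pollster network.outgoing.bytes.rate, no new resources found this cycle" line suggests discovery results are cached per cycle: local_instances is resolved once and reused by the pollsters that share it, and a pollster whose discovery yields nothing new is skipped. A toy per-cycle cache illustrating the idea; the class and names are illustrative, not ceilometer's API:

# Toy per-cycle discovery cache: the discovery function runs once per cycle,
# however many pollsters ask for it.
class DiscoveryCache:
    def __init__(self):
        self._cache = {}

    def discover(self, method, fn):
        if method not in self._cache:
            self._cache[method] = fn()
        return self._cache[method]

cycle = DiscoveryCache()
calls = []

def local_instances():
    calls.append(1)
    return ["997ebdcf", "dff9d513"]

cycle.discover("local_instances", local_instances)
cycle.discover("local_instances", local_instances)
print(len(calls))  # 1 -> discovery ran once for the whole cycle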
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa4f67283b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.340 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.341 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67283e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.341 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-19T20:51:15.341099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.341 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 893810108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.341 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.latency volume: 72441655 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.342 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 1154094577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.342 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.latency volume: 68730024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.342 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.342 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa4f672a120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.342 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a3f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.343 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa4f672a1b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672a420>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-19T20:51:15.343136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.344 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa4f6728410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-19T20:51:15.344384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-19T20:51:15.345523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 1098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
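Pairing disk.device.read.latency (cumulative time in nanoseconds) with disk.device.read.requests (cumulative count) gives a mean per-request read latency. Using the first instance's first-device figures above, and assuming the two pollsters list devices in the same order:

# Mean read latency from the two cumulative counters logged above.
total_read_time_ns = 893_810_108   # disk.device.read.latency
total_read_requests = 1_111        # disk.device.read.requests

mean_latency_ms = total_read_time_ns / total_read_requests / 1e6
print(f"{mean_latency_ms:.3f} ms per read")  # ~0.805 ms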
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa4f672a150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6921460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.346 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-19T20:51:15.346898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa4f6728470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.347 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.348 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.348 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-19T20:51:15.347945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.348 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.348 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa4f68f6030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67284d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 73101312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-19T20:51:15.349459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.349 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 73109504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa4f672ab10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.350 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-19T20:51:15.350812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.351 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.351 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.351 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.351 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
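The capacity/allocation/usage triple polled for each device maps onto libvirt's block-info fields (capacity, allocation, physical). A hedged cross-check directly against libvirt, assuming the libvirt-python binding is installed, read access to the hypervisor socket, and a known domain name (instance-00000001 below is hypothetical):

# Cross-check ceilometer's disk.device.* volumes against libvirt block info.
import libvirt  # pip install libvirt-python

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")        # hypothetical domain name
capacity, allocation, physical = dom.blockInfo("vda")
print(f"capacity={capacity} allocation={allocation} physical={physical}")
conn.close()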
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.351 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa4f6728500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 3085349853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-19T20:51:15.352220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 15219964748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.352 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa4f672a0c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6729d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.353 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa4f6728560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-19T20:51:15.353715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.354 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.355 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.355 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-19T20:51:15.354804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.355 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.355 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.355 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa4f67285c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f67285f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa4f6728620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-19T20:51:15.356297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f6728650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa4f672be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-19T20:51:15.357120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-19T20:51:15.358008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/memory.usage volume: 42.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa4f672be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa4f66d8230>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.358 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.359 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fa4f672bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.359 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-19T20:51:15.359050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.359 15 DEBUG ceilometer.compute.pollsters [-] 997ebdcf-7eab-485b-8fbf-d21112c78946/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.359 15 DEBUG ceilometer.compute.pollsters [-] dff9d513-54f8-4d73-acf7-df610dc4d064/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.359 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
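The disk.*, memory.usage and network.* entries above all trace the same pollster lifecycle: run the local_instances discovery, check whether the pollster's source needs coordination (a hashring of None means poll everything on this node), record a heartbeat, then turn the hypervisor stats into per-instance samples. A minimal Python sketch of that shape, using hypothetical names plus the instance UUID and volume taken from the log (not the real ceilometer.polling.manager code):

import datetime

def local_instances():
    # stand-in for the [local_instances] discovery method in the log
    return ["997ebdcf-7eab-485b-8fbf-d21112c78946",
            "dff9d513-54f8-4d73-acf7-df610dc4d064"]

def run_pollster(name, get_volume, heartbeats):
    resources = local_instances()        # "Executing discovery process ..."
    # uncoordinated source (hashring None): poll everything locally
    heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
    for res in resources:                # "<uuid>/<meter> volume: <value>"
        print(f"{res}/{name} volume: {get_volume(res)}")

heartbeats = {}
run_pollster("memory.usage", lambda res: 42.71875, heartbeats)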
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.360 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 19 20:51:15 compute-0 ceilometer_agent_compute[198530]: 2026-02-19 20:51:15.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
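The burst of "Finished processing pollster [...]" lines is the tail of a single polling task: every configured meter is processed and then logged independently, which is why the ordering looks arbitrary. A hypothetical sketch of that driver loop:

import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("polling.manager")

def execute_polling_task_processing(pollsters, run_one):
    for name in pollsters:
        run_one(name)   # discovery + poll + publish, as sketched earlier
        LOG.debug("Finished processing pollster [%s].", name)

execute_polling_task_processing(
    ["disk.ephemeral.size", "disk.root.size", "memory.usage"],
    run_one=lambda name: None)   # stand-in for the real per-meter work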
Feb 19 20:51:15 compute-0 nova_compute[188777]: 2026-02-19 20:51:15.894 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:16 compute-0 nova_compute[188777]: 2026-02-19 20:51:16.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:17 compute-0 nova_compute[188777]: 2026-02-19 20:51:17.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:51:17 compute-0 ovn_controller[98843]: 2026-02-19T20:51:17Z|00161|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Feb 19 20:51:19 compute-0 nova_compute[188777]: 2026-02-19 20:51:19.228 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
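The recurring "[POLLIN] on fd 26 __log_wakeup" entries come from the OVS IDL thread inside nova_compute: it blocks in a poll loop on the OVSDB socket and logs every wakeup at DEBUG. The same poll-and-wake pattern with the standard library (ovs/poller.py wraps this; fd 26 in the log is just that connection's descriptor):

import select
import socket

sock_a, sock_b = socket.socketpair()
sock_b.send(b"wake")                    # pending data makes sock_a readable

p = select.poll()
p.register(sock_a.fileno(), select.POLLIN)
for fd, events in p.poll():             # blocks until a registered fd is ready
    if events & select.POLLIN:
        print(f"[POLLIN] on fd {fd}")   # what ovs/poller.py logs at DEBUG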
Feb 19 20:51:19 compute-0 podman[259556]: 2026-02-19 20:51:19.386541153 +0000 UTC m=+0.061284485 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 19 20:51:19 compute-0 podman[259555]: 2026-02-19 20:51:19.415718735 +0000 UTC m=+0.094587987 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Feb 19 20:51:19 compute-0 podman[259599]: 2026-02-19 20:51:19.528753965 +0000 UTC m=+0.075766368 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
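The podman health_status events above are periodic healthcheck transients: podman execs the configured 'test' command inside each container and records the resulting status and failing streak. A sketch of reading that state back, assuming the podman CLI is on PATH and using the node_exporter container name from the log:

import json
import subprocess

out = subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}", "node_exporter"],
    capture_output=True, text=True, check=True).stdout
health = json.loads(out)
print(health["Status"], "failing streak:", health["FailingStreak"])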
Feb 19 20:51:20 compute-0 nova_compute[188777]: 2026-02-19 20:51:20.899 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:24 compute-0 nova_compute[188777]: 2026-02-19 20:51:24.230 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:25 compute-0 nova_compute[188777]: 2026-02-19 20:51:25.904 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:27 compute-0 podman[259618]: 2026-02-19 20:51:27.373281032 +0000 UTC m=+0.061807392 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, release=1214.1726694543)
Feb 19 20:51:27 compute-0 podman[259619]: 2026-02-19 20:51:27.383904583 +0000 UTC m=+0.067878030 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 19 20:51:29 compute-0 nova_compute[188777]: 2026-02-19 20:51:29.233 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:29 compute-0 podman[204724]: time="2026-02-19T20:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:51:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 19 20:51:29 compute-0 podman[204724]: @ - - [19/Feb/2026:20:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4852 "" "Go-http-client/1.1"
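podman[204724] here is the podman system service answering libpod REST calls over its unix socket (the "@" client field is the socket peer); the two GETs list all containers and take a one-shot stats sample, matching what prometheus-podman-exporter scrapes. A sketch of the same list call, assuming the service socket at /run/podman/podman.sock as mounted in the podman_exporter config above (the API version prefix may differ per host):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over an AF_UNIX socket, enough for one libpod call."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self._socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])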
Feb 19 20:51:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:51:30.473 108175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:51:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:51:30.473 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:51:30 compute-0 ovn_metadata_agent[108170]: 2026-02-19 20:51:30.474 108175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
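The acquiring/acquired/released triple from ovn_metadata_agent is oslo_concurrency's standard lock trace around ProcessMonitor._check_child_processes, including the wait and hold times. A sketch that produces the same pattern, assuming oslo.concurrency is installed and debug logging is enabled:

import logging
from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass  # the agent respawns dead haproxy children inside this lock

check_child_processes()  # emits the Acquiring / acquired / "released" trace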
Feb 19 20:51:30 compute-0 nova_compute[188777]: 2026-02-19 20:51:30.908 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:31 compute-0 openstack_network_exporter[207898]: ERROR   20:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:51:31 compute-0 openstack_network_exporter[207898]: ERROR   20:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
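Both openstack_network_exporter errors are expected on this host: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only answer for the userspace (DPDK) datapath, and this node runs the kernel datapath, so ovs-vswitchd replies "please specify an existing datapath". A sketch of skipping those collectors when no userspace datapath exists, assuming the ovs-appctl CLI:

import subprocess

def pmd_perf_show():
    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                       capture_output=True, text=True)
    if "please specify an existing datapath" in (r.stdout + r.stderr):
        return None          # kernel datapath: no PMD stats to collect
    return r.stdout

print(pmd_perf_show())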
Feb 19 20:51:31 compute-0 podman[259656]: 2026-02-19 20:51:31.415338932 +0000 UTC m=+0.095217676 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 19 20:51:33 compute-0 podman[259679]: 2026-02-19 20:51:33.411297743 +0000 UTC m=+0.082295381 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 19 20:51:34 compute-0 nova_compute[188777]: 2026-02-19 20:51:34.236 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:35 compute-0 podman[259698]: 2026-02-19 20:51:35.472619107 +0000 UTC m=+0.139761867 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 19 20:51:35 compute-0 nova_compute[188777]: 2026-02-19 20:51:35.911 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:39 compute-0 nova_compute[188777]: 2026-02-19 20:51:39.238 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:40 compute-0 nova_compute[188777]: 2026-02-19 20:51:40.916 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:44 compute-0 nova_compute[188777]: 2026-02-19 20:51:44.239 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:45 compute-0 nova_compute[188777]: 2026-02-19 20:51:45.920 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:49 compute-0 nova_compute[188777]: 2026-02-19 20:51:49.241 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:50 compute-0 podman[259727]: 2026-02-19 20:51:50.384558487 +0000 UTC m=+0.065806897 container health_status 59752aa8c455bc1dad12c4255ec678df77e817cb47c1d6e70b6896845a95af5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 19 20:51:50 compute-0 podman[259728]: 2026-02-19 20:51:50.39040389 +0000 UTC m=+0.063339450 container health_status fa1efb7456e17541596c3e88618464fbf98e2647108ba8b9611a9e0fce2904ad (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 19 20:51:50 compute-0 podman[259726]: 2026-02-19 20:51:50.41028804 +0000 UTC m=+0.090507808 container health_status 3b13f03f41c1b84d63d0d21377b1219686db2fe85902ddcf3137100689310692 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vendor=Red Hat, Inc., config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Feb 19 20:51:50 compute-0 nova_compute[188777]: 2026-02-19 20:51:50.926 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:54 compute-0 nova_compute[188777]: 2026-02-19 20:51:54.244 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:55 compute-0 nova_compute[188777]: 2026-02-19 20:51:55.929 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:58 compute-0 podman[259785]: 2026-02-19 20:51:58.390105382 +0000 UTC m=+0.066448407 container health_status ed1ae3eb575cb7f289cc2d267e5826af41630789bbd4821fb02dfbc1b56e662e (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 19 20:51:58 compute-0 podman[259784]: 2026-02-19 20:51:58.423563117 +0000 UTC m=+0.103604348 container health_status 9fd1661cb3b6c8baaf034b7337cc05b859a3e0ebc04f97df76cf1d83336dbbce (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, config_id=kepler, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 19 20:51:59 compute-0 nova_compute[188777]: 2026-02-19 20:51:59.247 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:51:59 compute-0 podman[204724]: time="2026-02-19T20:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 19 20:51:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 19 20:51:59 compute-0 podman[204724]: @ - - [19/Feb/2026:20:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Feb 19 20:52:00 compute-0 nova_compute[188777]: 2026-02-19 20:52:00.933 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:01 compute-0 openstack_network_exporter[207898]: ERROR   20:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 19 20:52:01 compute-0 openstack_network_exporter[207898]: ERROR   20:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 19 20:52:02 compute-0 podman[259822]: 2026-02-19 20:52:02.399817182 +0000 UTC m=+0.081186457 container health_status 9e54581c620c99708e6081949402bd1728a957422262b3dcff5893a762acadc2 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 19 20:52:02 compute-0 sshd-session[259845]: Accepted publickey for zuul from 192.168.122.10 port 49436 ssh2: ECDSA SHA256:U7+XUhHIIKxaxeCtrtx4n7poU9CMVA2TmDaaiHbw4x0
Feb 19 20:52:02 compute-0 systemd-logind[810]: New session 31 of user zuul.
Feb 19 20:52:02 compute-0 systemd[1]: Started Session 31 of User zuul.
Feb 19 20:52:02 compute-0 sshd-session[259845]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 19 20:52:02 compute-0 sudo[259849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 19 20:52:02 compute-0 sudo[259849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
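The zuul login and sudo entry mark the start of log collection: sos report bundles the container, openstack_edpm, system, storage and virt plugin data under /var/tmp/sos-osp. A thin Python wrapper around the exact command from the sudo line (must run as root, like the logged session):

import os
import shutil
import subprocess

tmp = "/var/tmp/sos-osp"
shutil.rmtree(tmp, ignore_errors=True)   # rm -rf /var/tmp/sos-osp
os.makedirs(tmp)                         # mkdir /var/tmp/sos-osp
subprocess.run(["sos", "report", "--batch", "--all-logs",
                f"--tmp-dir={tmp}",
                "-p", "container,openstack_edpm,system,storage,virt"],
               check=True)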
Feb 19 20:52:03 compute-0 nova_compute[188777]: 2026-02-19 20:52:03.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:03 compute-0 podman[259883]: 2026-02-19 20:52:03.667760801 +0000 UTC m=+0.074984023 container health_status 7861cce14a15c55f90a42c8c9a944db723d3f1db5be6c9c2d5060eb08182187a (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=5a9d1bc4c8b8cce85e210fe405122fb0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 19 20:52:04 compute-0 nova_compute[188777]: 2026-02-19 20:52:04.250 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:05 compute-0 nova_compute[188777]: 2026-02-19 20:52:05.938 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:06 compute-0 nova_compute[188777]: 2026-02-19 20:52:06.265 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:06 compute-0 podman[260029]: 2026-02-19 20:52:06.436353049 +0000 UTC m=+0.122647952 container health_status 626cf262745349c8a45276678390772ebfb04c1b719845050900a81dbbc242c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '65cac4588f43068a161a9d72381a59490e60abeb65bf2e4b7286a447ea673872-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2-86908679b1c003534bbf267bb391bc0635e769dabc16eb125faaf3e9bd1c4bc2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.264 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.264 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.949 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.950 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquired lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.950 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 19 20:52:07 compute-0 nova_compute[188777]: 2026-02-19 20:52:07.950 188781 DEBUG nova.objects.instance [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 997ebdcf-7eab-485b-8fbf-d21112c78946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 19 20:52:09 compute-0 nova_compute[188777]: 2026-02-19 20:52:09.253 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:09 compute-0 ovs-vsctl[260110]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 19 20:52:09 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 259873 (sos)
Feb 19 20:52:09 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 19 20:52:09 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 19 20:52:10 compute-0 virtqemud[188195]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 19 20:52:10 compute-0 virtqemud[188195]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 19 20:52:10 compute-0 virtqemud[188195]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 19 20:52:10 compute-0 nova_compute[188777]: 2026-02-19 20:52:10.942 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.232 188781 DEBUG nova.network.neutron [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updating instance_info_cache with network_info: [{"id": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "address": "fa:16:3e:f7:60:ee", "network": {"id": "ef3fe901-c03c-42fd-97b9-c1f0218f248b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-572210270-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ce0de2bf12421a9458013ccaa2dcad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44b4451c-db", "ovs_interfaceid": "44b4451c-db39-42a3-a2c6-5c8c42d1669b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.260 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Releasing lock "refresh_cache-997ebdcf-7eab-485b-8fbf-d21112c78946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.260 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] [instance: 997ebdcf-7eab-485b-8fbf-d21112c78946] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.261 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.261 188781 DEBUG nova.compute.manager [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.262 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.289 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.290 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.290 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.290 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.375 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.430 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.431 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.484 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/997ebdcf-7eab-485b-8fbf-d21112c78946/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.499 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.558 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.559 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 19 20:52:11 compute-0 crontab[260538]: (root) LIST (root)
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.609 188781 DEBUG oslo_concurrency.processutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dff9d513-54f8-4d73-acf7-df610dc4d064/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.913 188781 WARNING nova.virt.libvirt.driver [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.914 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4711MB free_disk=72.06846618652344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.914 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 19 20:52:11 compute-0 nova_compute[188777]: 2026-02-19 20:52:11.914 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.012 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance 997ebdcf-7eab-485b-8fbf-d21112c78946 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.013 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Instance dff9d513-54f8-4d73-acf7-df610dc4d064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.013 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.014 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.112 188781 DEBUG nova.compute.provider_tree [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed in ProviderTree for provider: c266959e-952e-41ad-bc2e-56513f39ec2d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.134 188781 DEBUG nova.scheduler.client.report [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Inventory has not changed for provider c266959e-952e-41ad-bc2e-56513f39ec2d based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.137 188781 DEBUG nova.compute.resource_tracker [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 19 20:52:12 compute-0 nova_compute[188777]: 2026-02-19 20:52:12.138 188781 DEBUG oslo_concurrency.lockutils [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 19 20:52:13 compute-0 kernel: /proc/cgroups lists only v1 controllers, use cgroup.controllers of root cgroup for v2 info
Feb 19 20:52:13 compute-0 systemd[1]: Starting Hostname Service...
Feb 19 20:52:13 compute-0 systemd[1]: Started Hostname Service.
Feb 19 20:52:14 compute-0 nova_compute[188777]: 2026-02-19 20:52:14.140 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:14 compute-0 nova_compute[188777]: 2026-02-19 20:52:14.256 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:15 compute-0 nova_compute[188777]: 2026-02-19 20:52:15.261 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:15 compute-0 nova_compute[188777]: 2026-02-19 20:52:15.946 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:18 compute-0 nova_compute[188777]: 2026-02-19 20:52:18.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 19 20:52:19 compute-0 nova_compute[188777]: 2026-02-19 20:52:19.259 188781 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 19 20:52:19 compute-0 nova_compute[188777]: 2026-02-19 20:52:19.263 188781 DEBUG oslo_service.periodic_task [None req-03af8dbc-cf81-4070-900c-c7b3eb490ae4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
